path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M)
---|---|
labs/2019-02-28_Lab07/vierthaler-stylometry/uvastylometry.ipynb | ###Markdown
Virginia: Stylometry

In this Jupyter notebook I will cover the basics of performing stylometric analysis on a large collection of texts.

Using this notebook

The code in this notebook is distributed across a few different code blocks. You will need to run them top to bottom, but the actual analysis does not happen until the last block. To run everything, simply click on "Run All" in the "Cell" menu. I have also provided a plain Python file that you can run from the command line.

Importing necessary libraries

This code block imports the libraries I will use.
###Code
import re, os, sys, platform, json
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.decomposition import PCA
# Plotting libraries
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.font_manager
import matplotlib.colors
###Output
_____no_output_____
###Markdown
Adjustable parameters: Analysis

The parameters that you might want to adjust for your analysis are contained in the following code block.

**ngrams** is an integer that determines the size of n-gram you use for analysis. 1 will look at single words, 2 will look at two at a time, 3 will look at three at a time, etc. 1-grams work best for most analyses. More than 3 will be slow and often results in very sparse data that is hard to interpret.

**commonWords** is an integer that determines how frequent a token must be in the corpus to be considered in the stylometric analysis. 500 will use the 500 most common words across all texts. You can set this to None if you do not want to limit words in this way.

**limitVocab** is a boolean (True or False). Set it to True if you want to specify a specific vocabulary.

**limitVocabularyFile** is the name of a file that contains the vocabulary you are interested in. The file should have one token per line. This file is only read if limitVocab is set to True.
###Code
# Size of n-grams:
ngrams = 1
# Limit the number of words to look at
commonWords = 500
# Set the vocabulary you are interested in
limitVocab = False
# Vocabulary file
limitVocabularyFile = "vocab.txt"
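# Illustrative sketch (an addition, not part of the original script): the ngrams value
# above is later handed to the vectorizer as ngram_range=(ngrams, ngrams). The tiny
# helper below shows what that setting means for a whitespace-tokenized example
# sentence; the function name and the sentence are made up for illustration only.
def ngramPreview(text, n):
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
# ngramPreview("the way that can be spoken", 2)
# -> ['the way', 'way that', 'that can', 'can be', 'be spoken']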
###Output
_____no_output_____
###Markdown
Adjustable parameters: Appearance

These parameters will help you set the appearance of the plot itself.

**labelTypes** is a tuple that specifies the nature of the corpus labeling. Here, the sample corpus files are all named with the convention author_title_section_genre.txt. Each type of label is one element in this tuple, in the same order they appear in the name.

**colorValue** is an integer that specifies which label should be used to generate a color scheme for the plot. Here 3 points to the 4th element in the tuple, the genre. Pick whichever label your analysis is focused on. More than 8 or so distinct values, however, will generate colors that are hard to tell apart.

**labelValue** is an integer that specifies which label should be used for labeling the points in the plot. Here 0 points to the 1st element in the tuple, the author.

**pointSize** is an integer that sets how large the points in the plot are.

**pointLabels** is a boolean (True or False) that specifies if the points should be labeled.

**plotLoadings** is a boolean that specifies if the vocabulary should be drawn on the plot (which will aid in interpretation). The further a term is from the center of the plot, the more it is influencing texts in a given direction.

**hidePoints** is a boolean that specifies if the points should be drawn. Set it to True to see the loadings better.

**outputDimensions** is a tuple that sets the width and height of the output plot in inches. The inner values can be either integers or floats.

**outputFile** contains the name of the output file, where the plot will be saved. The file extension will determine the file type. png, pdf, jpg, tif, and others are all valid selections. On Macs, because of an oddity of the plotting library, PDFs will be very large. You can fix this by opening the file with Adobe Illustrator (or another similar program) and then saving a copy. This is because the entire font is embedded in the file.
###Code
# Types of labels for documents in the corpus
labelTypes = ('author', 'title', 'section', 'genre') # tuple with strings
# Index of label used to set Color:
colorValue = 3 # Index of label to use for color (integer). Here 3 points to "genre"
# Index of label to use for plot labels (if points are labeled)
labelValue = 0 # Index of label to use for labels (integer). Here 0 points to "author"
# Point size (integer)
pointSize = 8
# Show point labels (add labels for each text):
pointLabels = False # True or False
# Plot loadings (write the characters to the plot)
plotLoadings = False # True or False
# Hide points (useful for seeing loadings better):
hidePoints = False # True or False
# Output file info (dimensions are in inches (width, height)):
outputDimensions = (10, 7.5) # Tuple of integers or floats
# Output file extension determines output type. Save as a pdf if you want to edit in Illustrator
# PDF Output on mac is very large, but just opening and saving a copy in illustrator will fix this
outputFile = "myfigure.png"
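# Sketch of the naming convention described above (the filename below is a made-up
# example): dropping ".txt" and splitting on "_" yields one piece per entry of
# labelTypes, which is exactly what the corpus-loading loop later in this notebook does.
exampleFilename = "authorA_titleB_sectionC_genreD.txt"
exampleLabels = exampleFilename[:-4].split("_")
# dict(zip(labelTypes, exampleLabels))
# -> {'author': 'authorA', 'title': 'titleB', 'section': 'sectionC', 'genre': 'genreD'}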
###Output
_____no_output_____
###Markdown
Adjustable Parameters: no need to change

These parameters can be adjusted, but you may as well leave them as they are.

**pcaComponents** is an integer that sets how many principal components should be calculated. We are only using two in this analysis, but as you work more with these plots, you can consider setting this higher (you will also have to adjust later parts of the script to make the extra components do anything). The maximum is limited by the number of texts and the number of variables (here the 500 words).

**corpusFolder** is the name of the folder that holds the corpus files. Just leave this as "corpus" if you put your files in a folder called "corpus".

**removeItemsFile** is a string that points to a file of words (tokens) that you want to remove from consideration. Each token to be removed should be on its own line in the specified file.
###Code
# How many components?
pcaComponents = 2 # Only useful for digging even deeper in the data
# Input folder
corpusFolder = "corpus"
# Items to remove from consideration:
removeItemsFile = "remove.txt"
###Output
_____no_output_____
###Markdown
Nothing beyond here needs editing!!

The comments in the code itself explain what is happening. If you run the script from a terminal, it will open a new window with your plot; it will look like the code keeps running until you close this window. This is an interactive explorer you can use to study the plot itself. Here the figure will simply be inserted after the code block.
###Code
####################
# Type Enforcement #
####################
# This section enforces the input values for all the adjustable variables. This
# is to make sure the script isn't run incorrectly.
# function to check values
def valueChecker(varname, typeofobj, value):
if type(typeofobj) == type:
if typeofobj == bool and type(value) != typeofobj:
print(f"{varname} must be a {typeofobj} (True or False). Please fix to run script.")
sys.exit()
        if type(value) != typeofobj:
            print(f"{varname} must be {typeofobj}. Please fix to run script.")
            sys.exit()
elif type(typeofobj) == tuple:
if type(value) != typeofobj[0] and type(value) != typeofobj[1]:
print(f"{varname} must be {typeofobj[0]} or {typeofobj[1]}. Please fix to run script.")
sys.exit()
# check values
valueChecker('ngrams', int, ngrams)
valueChecker('commonWords', (int, type(None)), commonWords)
valueChecker('limitVocab', bool, limitVocab)
valueChecker('colorValue', int, colorValue)
valueChecker('labelValue', int, labelValue)
valueChecker('pointSize', int, pointSize)
valueChecker('pointLabels', bool, pointLabels)
valueChecker('plotLoadings', bool, plotLoadings)
valueChecker('hidePoints', bool, hidePoints)
valueChecker('outputFile', str, outputFile)
valueChecker('pcaComponents', int, pcaComponents)
valueChecker('corpusFolder', str, corpusFolder)
valueChecker('removeItemsFile', str, removeItemsFile)
# check tuples and internal values
if type(labelTypes) != tuple:
print('labelTypes must be a tuple. Please fix to run script.')
sys.exit()
else:
for lab in labelTypes:
valueChecker('labelType item', str, lab)
if type(outputDimensions) != tuple:
    print(f"outputDimensions must be {tuple}. Please fix to run the script")
    sys.exit()
else:
for d in outputDimensions:
valueChecker("outerDimension value", (float, int), d)
# Load in external files
try:
removeItems = []
with open(removeItemsFile, "r", encoding='utf8') as rf:
removeItems = [item.strip() for item in rf.read().split("\n") if item != ""]
except FileNotFoundError:
print(f"No file named {removeItemsFile} found. Please check filename or create the file.")
sys.exit()
if limitVocab == True:
valueChecker('limitVocabularyFile', str, limitVocabularyFile)
try:
limitVocabulary = []
with open(limitVocabularyFile, "r", encoding='utf8') as rf:
limitVocabulary = [item.strip() for item in rf.read().split("\n") if item != ""]
if commonWords:
print(f"You are limiting analysis to the {commonWords} most common words but also using a set vocabulary.")
print("If you want to avoid unexpected behavior, set commonWords to None when limiting vocab.")
except FileNotFoundError:
print(f"No file named {limitVocabularyFile} found. Please check filename or create the file")
print("Defaulting to no limit on the vocabulary")
limitVocabulary = None
else:
limitVocabulary = None
# Ensure corpus folder exists
if not os.path.isdir(corpusFolder):
print(f"Could not find the corpus folder '{corpusFolder}'. Please double check.")
sys.exit()
########################
# Function definitions #
########################
# Function to clean the text. Remove desired characters and white space.
def clean(text, removeitems):
for item in removeitems:
text = text.replace(item, "")
    text = re.sub(r"\s+", " ", text)
return text
##############
# Load Texts #
##############
print("Loading, cleaning, and tokenizing")
# Go through each document in the corpus folder and save info to lists
texts = []
labels = []
for root, dirs, files in os.walk(corpusFolder):
for i, f in enumerate(files):
if f not in {'.DS_Store'}:
# add the labels to the label list
labels.append(f[:-4].split("_"))
# Open the text, clean it, and tokenize it
with open(os.path.join(root,f),"r", encoding='utf8', errors='ignore') as rf:
texts.append(clean(rf.read(), removeItems))
if i == len(files) - 1:
print(f"\r{i+1} of {len(files)} processed", end='\n', flush=True)
else:
print(f"\r{i+1} of {len(files)} processed", end='', flush=True)
####################
# Perform Analysis #
####################
print("Vectorizing")
countVectorizer = TfidfVectorizer(max_features=commonWords, use_idf=False, vocabulary=limitVocabulary, ngram_range=(ngrams, ngrams))
countMatrix = countVectorizer.fit_transform(texts)
print("Normalizing values")
countMatrix = normalize(countMatrix)
countMatrix = countMatrix.toarray()
print("Performing PCA")
# Lets perform PCA on the countMatrix:
pca = PCA(n_components=pcaComponents)
myPCA = pca.fit_transform(countMatrix)
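# Optional diagnostic (an addition to the original script): each principal component
# explains a share of the variance in the word-frequency matrix, which helps judge how
# much of the corpus structure the two plotted components actually capture.
explainedVariance = pca.explained_variance_ratio_  # array of length pcaComponents; values depend on the corpus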
##############
# Plot Setup #
##############
print("Setting plot info")
# set the plot size
plt.figure(figsize=outputDimensions)
# find all the unique values for each of the label types
uniqueLabelValues = [set() for i in range(len(labelTypes))]
for labelList in labels:
for i, label in enumerate(labelList):
uniqueLabelValues[i].add(label)
# create color dictionaries for all labels
colorDictionaries = []
for uniqueLabels in uniqueLabelValues:
colorpalette = sns.color_palette("husl",len(uniqueLabels)).as_hex()
colorDictionaries.append(dict(zip(uniqueLabels,colorpalette)))
# Now we need the Unique Labels
uniqueColorLabels = list(uniqueLabelValues[colorValue])
# Let's get a number for each class
numberForClass = [i for i in range(len(uniqueColorLabels))]
# Make a dictionary! This is new syntax for us! It just makes a dictionary where
# the keys are the unique labels and the values are found in numberForClass
labelForClassNumber = dict(zip(uniqueColorLabels,numberForClass))
# Let's make a new representation for each document that is just these integers
# and it needs to be a numpy array
textClass = np.array([labelForClassNumber[lab[colorValue]] for lab in labels])
# Make a list of the colors
colors = [colorDictionaries[colorValue][lab] for lab in uniqueColorLabels]
if hidePoints:
pointSize = 0
###################
# Create the plot #
###################
print("Plotting texts")
for col, classNumber, lab in zip(colors, numberForClass, uniqueColorLabels):
plt.scatter(myPCA[textClass==classNumber,0],myPCA[textClass==classNumber,1],label=lab,c=col, s=pointSize)
# Let's label individual points so we know WHICH document they are
if pointLabels:
print("Adding Labels")
for lab, datapoint in zip(labels, myPCA):
plt.annotate(str(lab[labelValue]),xy=datapoint)
# Let's graph component loadings
vocabulary = countVectorizer.get_feature_names()
loadings = pca.components_
if plotLoadings:
print("Rendering Loadings")
for i, word in enumerate(vocabulary):
plt.annotate(word, xy=(loadings[0, i], loadings[1,i]))
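# Optional sketch (added for illustration): the loadings can also be read as a ranked
# list instead of off the plot. Terms with the largest absolute loading on a component
# pull texts furthest along that axis. The helper below is not called by the script.
def topLoadingTerms(componentIndex, k=10):
    order = np.argsort(np.abs(loadings[componentIndex]))[::-1]
    return [(vocabulary[i], loadings[componentIndex, i]) for i in order[:k]]
# topLoadingTerms(0) would list the ten strongest terms on the first component.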
# Let's add a legend! matplotlib will make this for us based on the data we
# gave the scatter function.
plt.legend()
plt.savefig(outputFile)
############################################
# Output data for JavaScript Visualization #
############################################
data = []
for datapoint in myPCA:
pcDict = {}
for i, dp in enumerate(datapoint):
pcDict[f"PC{str(i + 1)}"] = dp
data.append(pcDict)
jsLoadings = []
for i, word in enumerate(vocabulary):
temploading = {}
for j,dp in enumerate(loadings):
temploading[f"PC{str(j+1)}"] = dp[i]
jsLoadings.append([word, temploading])
colorDictionaryList = []
for cd in colorDictionaries:
cdlist = [v for v in cd.values()]
colorDictionaryList.append(cdlist)
colorstrings = json.dumps(colorDictionaryList)
labelstrings = json.dumps(labels)
valuetypes = json.dumps([k for k in data[0].keys()])
datastrings = json.dumps(data)
limitedlabeltypes = []
for i, t in enumerate(labelTypes):
if len(uniqueLabelValues[i]) <= 20:
limitedlabeltypes.append(t)
cattypestrings = json.dumps(limitedlabeltypes)
loadingstrings = json.dumps(jsLoadings)
stringlist = [f"var colorDictionaries = {colorstrings};", f"var labels = {labelstrings};",
f"var data = {datastrings};", f"var categoryTypes = {list(labelTypes)};",
f"var loadings = {jsLoadings};", f"var valueTypes = {valuetypes};",
f"var limitedCategories = {limitedlabeltypes};",
f"var activecatnum = {colorValue};", f"var activelabelnum = {labelValue};"]
with open("data.js", "w", encoding="utf8") as wf:
wf.write("\n".join(stringlist))
# Show the plot
plt.show()
###Output
Loading, cleaning, and tokenizing
320 of 320 processed
Vectorizing
Normalizing values
Performing PCA
Setting plot info
Plotting texts
|
Noise.ipynb | ###Markdown
Binary Matrix
###Code
import random

import numpy as np
from matplotlib import pyplot as plt

def generateMatrix(n):
arr = []
for i in range(n):
row = []
for j in range(n):
if i <= j:
row = [0] + row
else:
row = [1] + row
arr = [row] + arr
return np.array(arr)
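# Side note (added): generateMatrix builds a strictly upper-triangular matrix of ones,
# so an equivalent vectorized version is a one-liner. This helper is only a sketch and
# nothing below depends on it.
def generateMatrixVectorized(n):
    return np.triu(np.ones((n, n), dtype=int), k=1)  # ones above the main diagonal, zeros elsewhere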
class PercentageFlipNoise:
def __init__(self, noisePercentage):
self.noisePercentage = noisePercentage
def apply_noise(self, D):
n = len(D)
num_flips = (np.square(n) - n) * self.noisePercentage
unique_elems = set()
for flip in range(int(num_flips)):
i, j = random.sample(range(n), 2)
while ((i, j) in unique_elems): i, j = random.sample(range(n), 2)
unique_elems.add((i, j))
D[i][j] = 1 - D[i][j]
###Output
_____no_output_____
###Markdown
n = 10
###Code
for i in range(6):
plt.subplot(2, 3, i+1)
plt.imshow(generateMatrix(10))
noisePercentages = [0.01, 0.05, 0.1, 0.25, 0.5, 0.75]
for i in range(6):
plt.subplot(2, 3, i+1)
noisePercentage = noisePercentages[i]
x = PercentageFlipNoise(noisePercentage)
matrix = generateMatrix(10)
x.apply_noise(matrix)
plt.imshow(matrix)
###Output
_____no_output_____
###Markdown
n = 100
###Code
for i in range(6):
plt.subplot(2, 3, i+1)
plt.imshow(generateMatrix(100))
noisePercentages = [0.01, 0.05, 0.1, 0.25, 0.5, 0.75]
for i in range(6):
plt.subplot(2, 3, i+1)
noisePercentage = noisePercentages[i]
x = PercentageFlipNoise(noisePercentage)
matrix = generateMatrix(100)
x.apply_noise(matrix)
plt.imshow(matrix)
from scipy import stats
rankingvector1 = [5, 4, 3, 2, 1]
rankingvector2 = [1, 2, 3, 4, 5]
tau, p_value = stats.kendalltau(rankingvector1, rankingvector2)
tau, p_value
from scipy import stats
rankingvector1 = [5, 4, 3, 2, 1]
rankingvector2 = [5, 4, 3, 2, 1]
tau, p_value = stats.kendalltau(rankingvector1,rankingvector2)
tau, p_value
###Output
_____no_output_____
###Markdown
Data generation and noise isolation by predicting the parameters of composite-function signals.
###Code
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
import random
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
# functions
x=np.linspace(1, 10, 150)
#linear
def linear(m,c):
y=m*x+c
return y
#gaussian
def gaussian(mu,sigma,a):
gu=((a * np.exp( - (x - mu)**2 / (2 * sigma**2) )))
return gu
# generation of signals
def calc():
m=random.uniform(.1,2)
mu=random.uniform(3,6)
sigma=random.uniform(.1,2)
c=random.uniform(0,3)
a=random.uniform(2,6)
noise=(np.random.normal(0,.1,150))
li=linear(m,c)
gaus=gaussian(mu,sigma,a)
sig=li+gaus+noise
return sig,m,mu,sigma,c,a,x
# generate a dataset of 2000 signals
signal = [calc() for i in range(2000)]
# signal is a list of (signal, m, mu, sigma, c, a, x) tuples
# generate dataframes
df = pd.DataFrame(signal)
signals=(df[0])
m=df[1]
mu=df[2]
sigma=df[3]
c=df[4]
a=df[5]
x=df[6]
# convert the signal column into a plain 2D list (2000 x 150)
signw=[[ signals[i][j] for j in range(150)] for i in range(2000)]
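# Illustration (added): because the generating parameters are stored next to each
# signal, the noiseless composite can be rebuilt and compared with the noisy version.
# The grid below mirrors the np.linspace(1, 10, 150) used in calc(); the helper is a
# sketch only and is not called anywhere below.
def reconstructCleanSignal(m_i, mu_i, sigma_i, c_i, a_i):
    grid = np.linspace(1, 10, 150)
    return m_i * grid + c_i + a_i * np.exp(-(grid - mu_i) ** 2 / (2 * sigma_i ** 2))
# e.g. residual = signw[0] - reconstructCleanSignal(m[0], mu[0], sigma[0], c[0], a[0])
# should look like the injected Gaussian noise.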
###Output
_____no_output_____
###Markdown
Data Saving
###Code
#form a pandas dataframe
data={'signal':signw,
'mu':df[2],
'sigma':df[3],
'amplitude':df[5],
'slope':df[1],
'constant':df[4]
}
Dataset2 =pd.DataFrame(data,columns = ['signal', 'mu', 'sigma', 'amplitude','slope','constant'])
#save data to CSV
Dataset2.to_csv('signal.csv')
Dataset2[:10]
###Output
_____no_output_____
###Markdown
SVR Prediction Module
###Code
#SVR for prediction for M
X_train, X_test, y_train, y_test = train_test_split(signw,m,test_size=0.5)
from sklearn.svm import SVR
clf = SVR(C=1.0, epsilon=0.2)
clf.fit(X_train,y_train)
clf.predict(X_test)
y1=clf.score(X_test,y_test)
y1
#SVR for prediction C
X_train, X_test, y_train, y_test = train_test_split(signw,c,test_size=0.5)
from sklearn.svm import SVR
clf = SVR(C=1.0, epsilon=0.2)
clf.fit(X_train,y_train)
clf.predict(X_test)
y2=clf.score(X_test,y_test)
y2
#SVR for prediction A
X_train, X_test, y_train, y_test = train_test_split(signw,a,test_size=0.5)
from sklearn.svm import SVR
clf = SVR(C=1.0, epsilon=0.2)
clf.fit(X_train,y_train)
clf.predict(X_test)
y3=clf.score(X_test,y_test)
y3
#SVR for prediction mu
X_train, X_test, y_train, y_test = train_test_split(signw,mu,test_size=0.5)
from sklearn.svm import SVR
clf = SVR(C=1.0, epsilon=0.2)
clf.fit(X_train,y_train)
clf.predict(X_test)
y4=clf.score(X_test,y_test)
y4
#SVR for prediction sigma
X_train, X_test, y_train, y_test = train_test_split(signw,sigma,test_size=0.5)
from sklearn.svm import SVR
clf = SVR(C=1.0, epsilon=0.2)
clf.fit(X_train,y_train)
clf.predict(X_test)
y5=clf.score(X_test,y_test)
y5
avg=(y1+y2+y3+y4+y5)/5
print('Average Accuracy of SVR for five parameters for a dataset of 1000 values is ',avg*100,'%')
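# Refactoring sketch (added, not executed): the five nearly identical SVR blocks above
# could be collapsed into one loop over the target parameters. Up to the randomness of
# the train/test split, this reproduces the scores y1..y5. The helper name is mine.
def svrScores(targets):
    scores = {}
    for name, target in targets.items():
        Xtr, Xte, ytr, yte = train_test_split(signw, target, test_size=0.5)
        model = SVR(C=1.0, epsilon=0.2)
        model.fit(Xtr, ytr)
        scores[name] = model.score(Xte, yte)
    return scores
# svrScores({'m': m, 'c': c, 'a': a, 'mu': mu, 'sigma': sigma})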
###Output
Average Accuracy of SVR for five parameters for a dataset of 1000 values is 85.782250132 %
###Markdown
Decision forest regression
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
#for prediction of M
X_train, X_test, y_train, y_test = train_test_split(signw,m,test_size=0.5)
regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)
y_res=regr.predict(X_test)
y11=regr.score(X_test,y_test)
y11
#for prediction of C
X_train, X_test, y_train, y_test = train_test_split(signw,c,test_size=0.5)
regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)
y_res=regr.predict(X_test)
y22=regr.score(X_test,y_test)
y22
#for prediction of a
X_train, X_test, y_train, y_test = train_test_split(signw,a,test_size=0.5)
regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)
y_res=regr.predict(X_test)
y33=regr.score(X_test,y_test)
y33
#for prediction of mu
X_train, X_test, y_train, y_test = train_test_split(signw,mu,test_size=0.5)
regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)
y_res=regr.predict(X_test)
y44=regr.score(X_test,y_test)
y44
#for prediction of sigma
X_train, X_test, y_train, y_test = train_test_split(signw,sigma,test_size=0.5)
regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)
y_res=regr.predict(X_test)
y55=regr.score(X_test,y_test)
y55
avg2=(y11+y22+y33+y44+y55)/5
print('Average Accuracy of Decision forest regressor for five parameters for a dataset of 1000 values is ',avg2*100,'%')
###Output
Average Accuracy of Decision forest regressor for five parameters for a dataset of 1000 values is 65.2735977949 %
###Markdown
Boosted Decision tree regression
###Code
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor
#for prediction of M
X_train, X_test, y_train, y_test = train_test_split(signw,m,test_size=0.5)
regr= AdaBoostRegressor(DecisionTreeRegressor(max_depth=4),
n_estimators=300)
regr.fit(X_train, y_train)
regr.predict(X_test)
g1=regr.score(X_test,y_test)
g1
#for prediction of C
X_train, X_test, y_train, y_test = train_test_split(signw,c,test_size=0.5)
regr= AdaBoostRegressor(DecisionTreeRegressor(max_depth=12),
n_estimators=3000)
regr.fit(X_train, y_train)
regr.predict(X_test)
g2=regr.score(X_test,y_test)
g2
#for prediction of a
X_train, X_test, y_train, y_test = train_test_split(signw,a,test_size=0.5)
regr= AdaBoostRegressor(DecisionTreeRegressor(max_depth=12),
n_estimators=3000)
regr.fit(X_train, y_train)
regr.predict(X_test)
g3=regr.score(X_test,y_test)
g3
#for prediction of MU
X_train, X_test, y_train, y_test = train_test_split(signw,mu,test_size=0.5)
regr= AdaBoostRegressor(DecisionTreeRegressor(max_depth=10),
n_estimators=3000)
regr.fit(X_train, y_train)
regr.predict(X_test)
g4=regr.score(X_test,y_test)
g4
#for prediction of sigma
X_train, X_test, y_train, y_test = train_test_split(signw,sigma,test_size=0.5)
regr= AdaBoostRegressor(DecisionTreeRegressor(max_depth=12),
n_estimators=4000)
regr.fit(X_train, y_train)
regr.predict(X_test)
g5=regr.score(X_test,y_test)
g5
avg3=(g1+g2+g3+g4+g5)/5
print('Average Accuracy of boosted Decision Tree for five parameters for a dataset of 1000 values is ',avg3*100,'%')
d = {'No':[1,2,3],
     'Algo': ['SVR', 'DFR','BDTR'],
     'R2 M': [y1,y11,g1],
     'R2 C':[y2,y22,g2],
     'R2 A':[y3,y33,g3],
     'R2 MU':[y4,y44,g4],
     'R2 Sigma': [y5,y55,g5],
     'Avg':[avg,avg2,avg3]}
dff = pd.DataFrame(data=d)
dff =dff.set_index('No').reset_index()
dff
###Output
_____no_output_____ |
pmjp.ipynb | ###Markdown
REPORT ON THE FINANCIAL DATA OF THE JOÃO PESSOA CITY GOVERNMENT, 2009 TO 2018

EXPENSES
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

df_despesas = pd.read_csv ('tabela_despesas_detalhamento.csv', sep='|')
df_despesas.head()
df_despesas.describe()
dez_mais = df_despesas.nlargest(10,'valor_pago')
dez_mais.head()
valores = df_despesas['valor_pago']
data = df_despesas['data_empenho']
plt.figure(1, figsize=(15, 9))
ind = np.arange(len(data))
plt.scatter (ind, valores)
plt.ylabel('billions')
plt.title('amounts paid from 2009 to 2018')
plt.grid(True)
plt.savefig('dispersao_receita.png', transparent=True)
plt.show()
ano_2009 = df_despesas.loc[df_despesas['ano_empenho']==2009]
ano_2010 = df_despesas.loc[df_despesas['ano_empenho']==2010]
ano_2011 = df_despesas.loc[df_despesas['ano_empenho']==2011]
ano_2012 = df_despesas.loc[df_despesas['ano_empenho']==2012]
ano_2013 = df_despesas.loc[df_despesas['ano_empenho']==2013]
ano_2014 = df_despesas.loc[df_despesas['ano_empenho']==2014]
ano_2015 = df_despesas.loc[df_despesas['ano_empenho']==2015]
ano_2016 = df_despesas.loc[df_despesas['ano_empenho']==2016]
ano_2017 = df_despesas.loc[df_despesas['ano_empenho']==2017]
ano_2018 = df_despesas.loc[df_despesas['ano_empenho']==2018]
ano_2009_total = ano_2009['valor_pago'].sum()
ano_2010_total = ano_2010['valor_pago'].sum()
ano_2011_total = ano_2011['valor_pago'].sum()
ano_2012_total = ano_2012['valor_pago'].sum()
ano_2013_total = ano_2013['valor_pago'].sum()
ano_2014_total = ano_2014['valor_pago'].sum()
ano_2015_total = ano_2015['valor_pago'].sum()
ano_2016_total = ano_2016['valor_pago'].sum()
ano_2017_total = ano_2017['valor_pago'].sum()
ano_2018_total = ano_2018['valor_pago'].sum()
grafico_despesa = [ano_2009_total, ano_2010_total, ano_2011_total, ano_2012_total, ano_2013_total, ano_2014_total, ano_2015_total, ano_2016_total, ano_2017_total, ano_2018_total]
anos = ['2009', '2010', '2011', '2012','2013', '2014', '2015', '2016', '2017', '2018']
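# Alternative sketch (added): the ten per-year filters and sums above are equivalent to
# a single groupby on the 'ano_empenho' column; the resulting Series is indexed by year
# and its values should line up with grafico_despesa.
despesa_por_ano = df_despesas.groupby('ano_empenho')['valor_pago'].sum()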
plt.figure(1, figsize=(15, 9))
plt.plot (anos, grafico_despesa, label='Expenses')
plt.ylabel('Billions')
plt.title('expenses by year')
plt.grid(True)
plt.legend()
plt.savefig('despesa.png', transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
REVENUE
###Code
df_2009 = pd.read_excel ('tabela_download2009.xlsx')
df_2010 = pd.read_excel ('tabela_download2010.xlsx')
df_2011 = pd.read_excel ('tabela_download2011.xlsx')
df_2012 = pd.read_excel ('tabela_download2012.xlsx')
df_2013 = pd.read_excel ('tabela_download2013.xlsx')
df_2014 = pd.read_excel ('tabela_download2014.xlsx')
df_2015 = pd.read_excel ('tabela_download2015.xlsx')
df_2016 = pd.read_excel ('tabela_download2016.xlsx')
df_2017 = pd.read_excel ('tabela_download2017.xlsx')
df_2018 = pd.read_excel ('tabela_download2018.xlsx')
df_2018.head()
df_2009_total = df_2009['valor'].sum()
df_2010_total = df_2010['valor'].sum()
df_2011_total = df_2011['valor'].sum()
df_2012_total = df_2012['valor'].sum()
df_2013_total = df_2013['valor'].sum()
df_2014_total = df_2014['valor'].sum()
df_2015_total = df_2015['valor'].sum()
df_2016_total = df_2016['valor'].sum()
df_2017_total = df_2017['valor'].sum()
df_2018_total = df_2018['valor'].sum()
df_2009_media = df_2009['valor'].mean()
df_2010_media = df_2010['valor'].mean()
df_2011_media = df_2011['valor'].mean()
df_2012_media = df_2012['valor'].mean()
df_2013_media = df_2013['valor'].mean()
df_2014_media = df_2014['valor'].mean()
df_2015_media = df_2015['valor'].mean()
df_2016_media = df_2016['valor'].mean()
df_2017_media = df_2017['valor'].mean()
df_2018_media = df_2018['valor'].mean()
receita_total = [df_2009_total, df_2010_total, df_2011_total, df_2012_total, df_2013_total, df_2014_total, df_2015_total, df_2016_total,
df_2017_total, df_2018_total]
receita_media = [df_2009_media, df_2010_media, df_2011_media, df_2012_media, df_2013_media, df_2014_media, df_2015_media, df_2016_media, df_2017_media, df_2018_media]
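# Compact alternative (added sketch): the ten spreadsheets only differ by year, so they
# could be loaded in a loop. The helper below assumes the same file naming pattern used
# above; its name is hypothetical and nothing below depends on it.
def loadRevenueYear(ano):
    return pd.read_excel(f'tabela_download{ano}.xlsx')
# receita_total could then be built as
# [loadRevenueYear(ano)['valor'].sum() for ano in range(2009, 2019)]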
plt.figure(1, figsize=(15, 9))
plt.plot (anos, receita_total, label='Revenue')
plt.ylabel('Billions')
plt.title('revenue by year')
plt.grid(True)
plt.legend()
plt.savefig('receita.png', transparent=True)
plt.show()
plt.figure(1, figsize=(15, 9))
plt.plot (anos, receita_media, label='Average')
plt.ylabel('millions')
plt.title('Average revenue by year')
plt.grid(True)
plt.legend()
plt.savefig('receita_media.png', transparent=True)
plt.show()
plt.figure(1, figsize=(20, 4))
names = ['Expenses', 'Revenue', 'Expenses vs Revenue']
values = [1, 10, 100]
plt.subplot(131)
plt.plot(anos, grafico_despesa)
plt.title('Expenses by year')
plt.grid(True)
plt.subplot(132)
plt.plot(anos, receita_total)
plt.title('Revenue by year')
plt.grid(True)
plt.subplot(133)
plt.plot(anos, grafico_despesa, label='expenses')
plt.plot(anos, receita_total, label='revenue')
plt.title('Revenue vs Expenses by year')
plt.grid(True)
plt.savefig('quadroreceitadespesa.png', transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
CITY GOVERNMENT EMPLOYEES
###Code
jan_2009 = pd.read_csv ('tabela_pessoal2009jan.csv', sep='|')
fev_2009 = pd.read_csv ('tabela_pessoal2009fev.csv', sep='|')
mar_2009 = pd.read_csv ('tabela_pessoal2009mar.csv', sep='|')
abr_2009 = pd.read_csv ('tabela_pessoal2009abr.csv', sep='|')
mai_2009 = pd.read_csv ('tabela_pessoal2009mai.csv', sep='|')
jun_2009 = pd.read_csv ('tabela_pessoal2009jun.csv', sep='|')
jul_2009 = pd.read_csv ('tabela_pessoal2009jul.csv', sep='|')
ago_2009 = pd.read_csv ('tabela_pessoal2009ago.csv', sep='|')
set_2009 = pd.read_csv ('tabela_pessoal2009set.csv', sep='|')
out_2009 = pd.read_csv ('tabela_pessoal2009out.csv', sep='|')
nov_2009 = pd.read_csv ('tabela_pessoal2009nov.csv', sep='|')
dez_2009 = pd.read_csv ('tabela_pessoal2009dez.csv', sep='|')
jan_2010 = pd.read_csv ('tabela_pessoal2010jan.csv', sep='|')
fev_2010 = pd.read_csv ('tabela_pessoal2010fev.csv', sep='|')
mar_2010 = pd.read_csv ('tabela_pessoal2010mar.csv', sep='|')
abr_2010 = pd.read_csv ('tabela_pessoal2010abr.csv', sep='|')
mai_2010 = pd.read_csv ('tabela_pessoal2010mai.csv', sep='|')
jun_2010 = pd.read_csv ('tabela_pessoal2010jun.csv', sep='|')
jul_2010 = pd.read_csv ('tabela_pessoal2010jul.csv', sep='|')
ago_2010 = pd.read_csv ('tabela_pessoal2010ago.csv', sep='|')
set_2010 = pd.read_csv ('tabela_pessoal2010set.csv', sep='|')
out_2010 = pd.read_csv ('tabela_pessoal2010out.csv', sep='|')
nov_2010 = pd.read_csv ('tabela_pessoal2010nov.csv', sep='|')
dez_2010 = pd.read_csv ('tabela_pessoal2010dez.csv', sep='|')
jan_2011 = pd.read_csv ('tabela_pessoal2011jan.csv', sep='|')
fev_2011 = pd.read_csv ('tabela_pessoal2011fev.csv', sep='|')
mar_2011 = pd.read_csv ('tabela_pessoal2011mar.csv', sep='|')
abr_2011 = pd.read_csv ('tabela_pessoal2011abr.csv', sep='|')
mai_2011 = pd.read_csv ('tabela_pessoal2011mai.csv', sep='|')
jun_2011 = pd.read_csv ('tabela_pessoal2011jun.csv', sep='|')
jul_2011 = pd.read_csv ('tabela_pessoal2011jul.csv', sep='|')
ago_2011 = pd.read_csv ('tabela_pessoal2011ago.csv', sep='|')
set_2011 = pd.read_csv ('tabela_pessoal2011set.csv', sep='|')
out_2011 = pd.read_csv ('tabela_pessoal2011out.csv', sep='|')
nov_2011 = pd.read_csv ('tabela_pessoal2011nov.csv', sep='|')
dez_2011 = pd.read_csv ('tabela_pessoal2011dez.csv', sep='|')
jan_2012 = pd.read_csv ('tabela_pessoal2012jan.csv', sep='|')
fev_2012 = pd.read_csv ('tabela_pessoal2012fev.csv', sep='|')
mar_2012 = pd.read_csv ('tabela_pessoal2012mar.csv', sep='|')
abr_2012 = pd.read_csv ('tabela_pessoal2012abr.csv', sep='|')
mai_2012 = pd.read_csv ('tabela_pessoal2012mai.csv', sep='|')
jun_2012 = pd.read_csv ('tabela_pessoal2012jun.csv', sep='|')
jul_2012 = pd.read_csv ('tabela_pessoal2012jul.csv', sep='|')
ago_2012 = pd.read_csv ('tabela_pessoal2012ago.csv', sep='|')
set_2012 = pd.read_csv ('tabela_pessoal2012set.csv', sep='|')
out_2012 = pd.read_csv ('tabela_pessoal2012out.csv', sep='|')
nov_2012 = pd.read_csv ('tabela_pessoal2012nov.csv', sep='|')
dez_2012 = pd.read_csv ('tabela_pessoal2012dez.csv', sep='|')
jan_2013 = pd.read_csv ('tabela_pessoal2013jan.csv', sep='|')
fev_2013 = pd.read_csv ('tabela_pessoal2013fev.csv', sep='|')
mar_2013 = pd.read_csv ('tabela_pessoal2013mar.csv', sep='|')
abr_2013 = pd.read_csv ('tabela_pessoal2013abr.csv', sep='|')
mai_2013 = pd.read_csv ('tabela_pessoal2013mai.csv', sep='|')
jun_2013 = pd.read_csv ('tabela_pessoal2013jun.csv', sep='|')
jul_2013 = pd.read_csv ('tabela_pessoal2013jul.csv', sep='|')
ago_2013 = pd.read_csv ('tabela_pessoal2013ago.csv', sep='|')
set_2013 = pd.read_csv ('tabela_pessoal2013set.csv', sep='|')
out_2013 = pd.read_csv ('tabela_pessoal2013out.csv', sep='|')
nov_2013 = pd.read_csv ('tabela_pessoal2013nov.csv', sep='|')
dez_2013 = pd.read_csv ('tabela_pessoal2013dez.csv', sep='|')
jan_2014 = pd.read_csv ('tabela_pessoal2014jan.csv', sep='|')
fev_2014= pd.read_csv ('tabela_pessoal2014fev.csv', sep='|')
mar_2014 = pd.read_csv ('tabela_pessoal2014mar.csv', sep='|')
abr_2014 = pd.read_csv ('tabela_pessoal2014abr.csv', sep='|')
mai_2014 = pd.read_csv ('tabela_pessoal2014mai.csv', sep='|')
jun_2014 = pd.read_csv ('tabela_pessoal2014jun.csv', sep='|')
jul_2014 = pd.read_csv ('tabela_pessoal2014jul.csv', sep='|')
ago_2014 = pd.read_csv ('tabela_pessoal2014ago.csv', sep='|')
set_2014 = pd.read_csv ('tabela_pessoal2014set.csv', sep='|')
out_2014 = pd.read_csv ('tabela_pessoal2014out.csv', sep='|')
nov_2014 = pd.read_csv ('tabela_pessoal2014nov.csv', sep='|')
dez_2014 = pd.read_csv ('tabela_pessoal2014dez.csv', sep='|')
jan_2015 = pd.read_csv ('tabela_pessoal2015jan.csv', sep='|')
fev_2015= pd.read_csv ('tabela_pessoal2015fev.csv', sep='|')
mar_2015 = pd.read_csv ('tabela_pessoal2015mar.csv', sep='|')
abr_2015 = pd.read_csv ('tabela_pessoal2015abr.csv', sep='|')
mai_2015 = pd.read_csv ('tabela_pessoal2015mai.csv', sep='|')
jun_2015 = pd.read_csv ('tabela_pessoal2015jun.csv', sep='|')
jul_2015 = pd.read_csv ('tabela_pessoal2015jul.csv', sep='|')
ago_2015 = pd.read_csv ('tabela_pessoal2015ago.csv', sep='|')
set_2015 = pd.read_csv ('tabela_pessoal2015set.csv', sep='|')
out_2015 = pd.read_csv ('tabela_pessoal2015out.csv', sep='|')
nov_2015 = pd.read_csv ('tabela_pessoal2015nov.csv', sep='|')
dez_2015 = pd.read_csv ('tabela_pessoal2015dez.csv', sep='|')
jan_2016 = pd.read_csv ('tabela_pessoal2016jan.csv', sep='|')
fev_2016= pd.read_csv ('tabela_pessoal2016fev.csv', sep='|')
mar_2016 = pd.read_csv ('tabela_pessoal2016mar.csv', sep='|')
abr_2016 = pd.read_csv ('tabela_pessoal2016abr.csv', sep='|')
mai_2016 = pd.read_csv ('tabela_pessoal2016mai.csv', sep='|')
jun_2016 = pd.read_csv ('tabela_pessoal2016jun.csv', sep='|')
jul_2016= pd.read_csv ('tabela_pessoal2016jul.csv', sep='|')
ago_2016 = pd.read_csv ('tabela_pessoal2016ago.csv', sep='|')
set_2016 = pd.read_csv ('tabela_pessoal2016set.csv', sep='|')
out_2016 = pd.read_csv ('tabela_pessoal2016out.csv', sep='|')
nov_2016 = pd.read_csv ('tabela_pessoal2016nov.csv', sep='|')
dez_2016 = pd.read_csv ('tabela_pessoal2016dez.csv', sep='|')
jan_2017= pd.read_csv ('tabela_pessoal2017jan.csv', sep='|')
fev_2017= pd.read_csv ('tabela_pessoal2017fev.csv', sep='|')
mar_2017 = pd.read_csv ('tabela_pessoal2017mar.csv', sep='|')
abr_2017= pd.read_csv ('tabela_pessoal2017abr.csv', sep='|')
mai_2017 = pd.read_csv ('tabela_pessoal2017mai.csv', sep='|')
jun_2017 = pd.read_csv ('tabela_pessoal2017jun.csv', sep='|')
jul_2017= pd.read_csv ('tabela_pessoal2017jul.csv', sep='|')
ago_2017 = pd.read_csv ('tabela_pessoal2017ago.csv', sep='|')
set_2017= pd.read_csv ('tabela_pessoal2017set.csv', sep='|')
out_2017= pd.read_csv ('tabela_pessoal2017out.csv', sep='|')
nov_2017 = pd.read_csv ('tabela_pessoal2017nov.csv', sep='|')
dez_2017 = pd.read_csv ('tabela_pessoal2017dez.csv', sep='|')
jan_2018= pd.read_csv ('tabela_pessoal2018jan.csv', sep='|')
fev_2018= pd.read_csv ('tabela_pessoal2018fev.csv', sep='|')
mar_2018 = pd.read_csv ('tabela_pessoal2018mar.csv', sep='|')
abr_2018= pd.read_csv ('tabela_pessoal2018abr.csv', sep='|')
mai_2018 = pd.read_csv ('tabela_pessoal2018mai.csv', sep='|')
jun_2018 = pd.read_csv ('tabela_pessoal2018jun.csv', sep='|')
jul_2018= pd.read_csv ('tabela_pessoal2018jul.csv', sep='|')
ago_2018 = pd.read_csv ('tabela_pessoal2018ago.csv', sep='|')
set_2018= pd.read_csv ('tabela_pessoal2018set.csv', sep='|')
out_2018= pd.read_csv ('tabela_pessoal2018out.csv', sep='|')
nov_2018 = pd.read_csv ('tabela_pessoal2018nov.csv', sep='|')
dez_2018= pd.read_csv ('tabela_pessoal2018dez.csv', sep='|')
ano_2009_total = jan_2009.append(fev_2009).append(mar_2009).append(abr_2009).append(mai_2009).append(jun_2009).append(jul_2009).append(ago_2009).append(set_2009).append(out_2009).append(nov_2009).append(dez_2009)
ano_2010_total = jan_2010.append(fev_2010).append(mar_2010).append(abr_2010).append(mai_2010).append(jun_2010).append(jul_2010).append(ago_2010).append(set_2010).append(out_2010).append(nov_2010).append(dez_2010)
ano_2011_total = jan_2011.append(fev_2011).append(mar_2011).append(abr_2011).append(mai_2011).append(jun_2011).append(jul_2011).append(ago_2011).append(set_2011).append(out_2011).append(nov_2011).append(dez_2011)
ano_2012_total = jan_2012.append(fev_2012).append(mar_2012).append(abr_2012).append(mai_2012).append(jun_2012).append(jul_2012).append(ago_2012).append(set_2012).append(out_2012).append(nov_2012).append(dez_2012)
ano_2013_total = jan_2013.append(fev_2013).append(mar_2013).append(abr_2013).append(mai_2013).append(jun_2013).append(jul_2013).append(ago_2013).append(set_2013).append(out_2013).append(nov_2013).append(dez_2013)
ano_2014_total = jan_2014.append(fev_2014).append(mar_2014).append(abr_2014).append(mai_2014).append(jun_2014).append(jul_2014).append(ago_2014).append(set_2014).append(out_2014).append(nov_2014).append(dez_2014)
ano_2015_total = jan_2015.append(fev_2015).append(mar_2015).append(abr_2015).append(mai_2015).append(jun_2015).append(jul_2015).append(ago_2015).append(set_2015).append(out_2015).append(nov_2015).append(dez_2015)
ano_2016_total = jan_2016.append(fev_2016).append(mar_2016).append(abr_2016).append(mai_2016).append(jun_2016).append(jul_2016).append(ago_2016).append(set_2016).append(out_2016).append(nov_2016).append(dez_2016)
ano_2017_total = jan_2017.append(fev_2017).append(mar_2017).append(abr_2017).append(mai_2017).append(jun_2017).append(jul_2017).append(ago_2017).append(set_2017).append(out_2017).append(nov_2017).append(dez_2017)
ano_2018_total = jan_2018.append(fev_2018).append(mar_2018).append(abr_2018).append(mai_2018).append(jun_2018).append(jul_2018).append(ago_2018).append(set_2018).append(out_2018).append(nov_2018).append(dez_2018)
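# Refactoring sketch (added, not executed): the monthly payroll files follow the pattern
# 'tabela_pessoal<year><month>.csv', so the year-level frames built above could also be
# produced by a loop. The helper name is hypothetical and nothing below depends on it.
def loadPayrollYear(ano):
    meses = ['jan', 'fev', 'mar', 'abr', 'mai', 'jun', 'jul', 'ago', 'set', 'out', 'nov', 'dez']
    frames = [pd.read_csv(f'tabela_pessoal{ano}{mes}.csv', sep='|') for mes in meses]
    return pd.concat(frames, ignore_index=True)
# loadPayrollYear(2009) holds the same rows as ano_2009_total (with a fresh index).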
# total = ano_2009_total.append(ano_2010_total).append(ano_2011_total).append(ano_2012_total).append(ano_2013_total).append(ano_2014_total).append(ano_2015_total).append(ano_2016_total).append(ano_2017_total).append(ano_2018_total)
ano_2018_total.nlargest(5,'valor_total')
ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'Efetivo']
ano_2009_total['tipo_contratacao'].value_counts()
efetivo_2009 = ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'Efetivo']
efetivo_2010 = ano_2010_total.loc[ano_2010_total['tipo_contratacao'] == 'Efetivo']
efetivo_2011 = ano_2011_total.loc[ano_2011_total['tipo_contratacao'] == 'Efetivo']
efetivo_2012 = ano_2012_total.loc[ano_2012_total['tipo_contratacao'] == 'Efetivo']
efetivo_2013 = ano_2013_total.loc[ano_2013_total['tipo_contratacao'] == 'Efetivo']
efetivo_2014 = ano_2014_total.loc[ano_2014_total['tipo_contratacao'] == 'Efetivo']
efetivo_2015 = ano_2015_total.loc[ano_2015_total['tipo_contratacao'] == 'Efetivo']
efetivo_2016 = ano_2016_total.loc[ano_2016_total['tipo_contratacao'] == 'Efetivo']
efetivo_2017 = ano_2017_total.loc[ano_2017_total['tipo_contratacao'] == 'Efetivo']
efetivo_2018 = ano_2018_total.loc[ano_2018_total['tipo_contratacao'] == 'Efetivo']
efetivo_2009_contra = ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2010_contra = ano_2010_total.loc[ano_2010_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2011_contra = ano_2011_total.loc[ano_2011_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2012_contra = ano_2012_total.loc[ano_2012_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2013_contra = ano_2013_total.loc[ano_2013_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2014_contra = ano_2014_total.loc[ano_2014_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2015_contra = ano_2015_total.loc[ano_2015_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2016_contra = ano_2016_total.loc[ano_2016_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2017_contra = ano_2017_total.loc[ano_2017_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2018_contra = ano_2018_total.loc[ano_2018_total['tipo_contratacao'] == 'Contratação por excepcional interesse público']
efetivo_2009_commi = ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'Comissionado']
efetivo_2010_commi = ano_2010_total.loc[ano_2010_total['tipo_contratacao'] == 'Comissionado']
efetivo_2011_commi = ano_2011_total.loc[ano_2011_total['tipo_contratacao'] == 'Comissionado']
efetivo_2012_commi = ano_2012_total.loc[ano_2012_total['tipo_contratacao'] == 'Comissionado']
efetivo_2013_commi = ano_2013_total.loc[ano_2013_total['tipo_contratacao'] == 'Comissionado']
efetivo_2014_commi = ano_2014_total.loc[ano_2014_total['tipo_contratacao'] == 'Comissionado']
efetivo_2015_commi = ano_2015_total.loc[ano_2015_total['tipo_contratacao'] == 'Comissionado']
efetivo_2016_commi = ano_2016_total.loc[ano_2016_total['tipo_contratacao'] == 'Comissionado']
efetivo_2017_commi = ano_2017_total.loc[ano_2017_total['tipo_contratacao'] == 'Comissionado']
efetivo_2018_commi = ano_2018_total.loc[ano_2018_total['tipo_contratacao'] == 'Comissionado']
efetivo_2009_adispo = ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'À disposição']
efetivo_2010_adispo = ano_2010_total.loc[ano_2010_total['tipo_contratacao'] == 'À disposição']
efetivo_2011_adispo = ano_2011_total.loc[ano_2011_total['tipo_contratacao'] == 'À disposição']
efetivo_2012_adispo = ano_2012_total.loc[ano_2012_total['tipo_contratacao'] == 'À disposição']
efetivo_2013_adispo = ano_2013_total.loc[ano_2013_total['tipo_contratacao'] == 'À disposição']
efetivo_2014_adispo = ano_2014_total.loc[ano_2014_total['tipo_contratacao'] == 'À disposição']
efetivo_2015_adispo = ano_2015_total.loc[ano_2015_total['tipo_contratacao'] == 'À disposição']
efetivo_2016_adispo = ano_2016_total.loc[ano_2016_total['tipo_contratacao'] == 'À disposição']
efetivo_2017_adispo = ano_2017_total.loc[ano_2017_total['tipo_contratacao'] == 'À disposição']
efetivo_2018_adispo = ano_2018_total.loc[ano_2018_total['tipo_contratacao'] == 'À disposição']
efetivo_2009_funco = ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2010_funco = ano_2010_total.loc[ano_2010_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2011_funco = ano_2011_total.loc[ano_2011_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2012_funco = ano_2012_total.loc[ano_2012_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2013_funco = ano_2013_total.loc[ano_2013_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2014_funco = ano_2014_total.loc[ano_2014_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2015_funco = ano_2015_total.loc[ano_2015_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2016_funco = ano_2016_total.loc[ano_2016_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2017_funco = ano_2017_total.loc[ano_2017_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2018_funco = ano_2018_total.loc[ano_2018_total['tipo_contratacao'] == 'Função de confiança']
efetivo_2009_eleti = ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'Eletivo']
efetivo_2010_eleti = ano_2010_total.loc[ano_2010_total['tipo_contratacao'] == 'Eletivo']
efetivo_2011_eleti = ano_2011_total.loc[ano_2011_total['tipo_contratacao'] == 'Eletivo']
efetivo_2012_eleti = ano_2012_total.loc[ano_2012_total['tipo_contratacao'] == 'Eletivo']
efetivo_2013_eleti = ano_2013_total.loc[ano_2013_total['tipo_contratacao'] == 'Eletivo']
efetivo_2014_eleti = ano_2014_total.loc[ano_2014_total['tipo_contratacao'] == 'Eletivo']
efetivo_2015_eleti = ano_2015_total.loc[ano_2015_total['tipo_contratacao'] == 'Eletivo']
efetivo_2016_eleti = ano_2016_total.loc[ano_2016_total['tipo_contratacao'] == 'Eletivo']
efetivo_2017_eleti = ano_2017_total.loc[ano_2017_total['tipo_contratacao'] == 'Eletivo']
efetivo_2018_eleti = ano_2018_total.loc[ano_2018_total['tipo_contratacao'] == 'Eletivo']
pensio_2009 = ano_2009_total.loc[ano_2009_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2010 = ano_2010_total.loc[ano_2010_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2011 = ano_2011_total.loc[ano_2011_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2012 = ano_2012_total.loc[ano_2012_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2013 = ano_2013_total.loc[ano_2013_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2014 = ano_2014_total.loc[ano_2014_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2015 = ano_2015_total.loc[ano_2015_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2016 = ano_2016_total.loc[ano_2016_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2017 = ano_2017_total.loc[ano_2017_total['tipo_contratacao'] == 'Inativos / Pensionistas']
pensio_2018 = ano_2018_total.loc[ano_2018_total['tipo_contratacao'] == 'Inativos / Pensionistas']
efetivo_2009_soma = efetivo_2009['valor_total'].sum()
efetivo_2010_soma = efetivo_2010['valor_total'].sum()
efetivo_2011_soma = efetivo_2011['valor_total'].sum()
efetivo_2012_soma = efetivo_2012['valor_total'].sum()
efetivo_2013_soma = efetivo_2013['valor_total'].sum()
efetivo_2014_soma = efetivo_2014['valor_total'].sum()
efetivo_2015_soma = efetivo_2015['valor_total'].sum()
efetivo_2016_soma = efetivo_2016['valor_total'].sum()
efetivo_2017_soma = efetivo_2017['valor_total'].sum()
efetivo_2018_soma = efetivo_2018['valor_total'].sum()
contra_2009_soma = efetivo_2009_contra['valor_total'].sum()
contra_2010_soma = efetivo_2010_contra['valor_total'].sum()
contra_2011_soma = efetivo_2011_contra['valor_total'].sum()
contra_2012_soma = efetivo_2012_contra['valor_total'].sum()
contra_2013_soma = efetivo_2013_contra['valor_total'].sum()
contra_2014_soma = efetivo_2014_contra['valor_total'].sum()
contra_2015_soma = efetivo_2015_contra['valor_total'].sum()
contra_2016_soma = efetivo_2016_contra['valor_total'].sum()
contra_2017_soma = efetivo_2017_contra['valor_total'].sum()
contra_2018_soma = efetivo_2018_contra['valor_total'].sum()
comi_2009_soma = efetivo_2009_commi['valor_total'].sum()
comi_2010_soma = efetivo_2010_commi['valor_total'].sum()
comi_2011_soma =efetivo_2011_commi['valor_total'].sum()
comi_2012_soma = efetivo_2012_commi['valor_total'].sum()
comi_2013_soma = efetivo_2013_commi['valor_total'].sum()
comi_2014_soma = efetivo_2014_commi['valor_total'].sum()
comi_2015_soma = efetivo_2015_commi['valor_total'].sum()
comi_2016_soma = efetivo_2016_commi['valor_total'].sum()
comi_2017_soma = efetivo_2017_commi['valor_total'].sum()
comi_2018_soma = efetivo_2018_commi['valor_total'].sum()
adispo_2009_soma = efetivo_2009_adispo['valor_total'].sum()
adispo_2010_soma = efetivo_2010_adispo['valor_total'].sum()
adispo_2011_soma =efetivo_2011_adispo['valor_total'].sum()
adispo_2012_soma = efetivo_2012_adispo['valor_total'].sum()
adispo_2013_soma = efetivo_2013_adispo['valor_total'].sum()
adispo_2014_soma = efetivo_2014_adispo['valor_total'].sum()
adispo_2015_soma = efetivo_2015_adispo['valor_total'].sum()
adispo_2016_soma = efetivo_2016_adispo['valor_total'].sum()
adispo_2017_soma = efetivo_2017_adispo['valor_total'].sum()
adispo_2018_soma = efetivo_2018_adispo['valor_total'].sum()
funco_2009_soma = efetivo_2009_funco['valor_total'].sum()
funco_2010_soma = efetivo_2010_funco['valor_total'].sum()
funco_2011_soma =efetivo_2011_funco['valor_total'].sum()
funco_2012_soma = efetivo_2012_funco['valor_total'].sum()
funco_2013_soma = efetivo_2013_funco['valor_total'].sum()
funco_2014_soma = efetivo_2014_funco['valor_total'].sum()
funco_2015_soma = efetivo_2015_funco['valor_total'].sum()
funco_2016_soma = efetivo_2016_funco['valor_total'].sum()
funco_2017_soma = efetivo_2017_funco['valor_total'].sum()
funco_2018_soma = efetivo_2018_funco['valor_total'].sum()
eleti_2009_soma = efetivo_2009_eleti['valor_total'].sum()
eleti_2010_soma = efetivo_2010_eleti['valor_total'].sum()
eleti_2011_soma =efetivo_2011_eleti['valor_total'].sum()
eleti_2012_soma = efetivo_2012_eleti['valor_total'].sum()
eleti_2013_soma = efetivo_2013_eleti['valor_total'].sum()
eleti_2014_soma = efetivo_2014_eleti['valor_total'].sum()
eleti_2015_soma = efetivo_2015_eleti['valor_total'].sum()
eleti_2016_soma = efetivo_2016_eleti['valor_total'].sum()
eleti_2017_soma = efetivo_2017_eleti['valor_total'].sum()
eleti_2018_soma = efetivo_2018_eleti['valor_total'].sum()
pensio_2009_total = pensio_2009['valor_total'].sum()
pensio_2010_total = pensio_2010['valor_total'].sum()
pensio_2011_total = pensio_2011['valor_total'].sum()
pensio_2012_total = pensio_2012['valor_total'].sum()
pensio_2013_total = pensio_2013['valor_total'].sum()
pensio_2014_total = pensio_2014['valor_total'].sum()
pensio_2015_total= pensio_2015['valor_total'].sum()
pensio_2016_total = pensio_2016['valor_total'].sum()
pensio_2017_total = pensio_2017['valor_total'].sum()
pensio_2018_total = pensio_2018['valor_total'].sum()
efetivos = [efetivo_2009_soma, efetivo_2010_soma, efetivo_2011_soma,efetivo_2012_soma,efetivo_2013_soma,efetivo_2014_soma,efetivo_2015_soma,efetivo_2016_soma,efetivo_2017_soma, efetivo_2018_soma]
contra = [contra_2009_soma, contra_2010_soma, contra_2011_soma,contra_2012_soma,contra_2013_soma,contra_2014_soma,contra_2015_soma,contra_2016_soma,contra_2017_soma, contra_2018_soma]
comissionado = [comi_2009_soma, comi_2010_soma, comi_2011_soma,comi_2012_soma,comi_2013_soma,comi_2014_soma,comi_2015_soma,comi_2016_soma,comi_2017_soma, comi_2018_soma]
adispo = [adispo_2009_soma, adispo_2010_soma, adispo_2011_soma,adispo_2012_soma,adispo_2013_soma,adispo_2014_soma,adispo_2015_soma,adispo_2016_soma,adispo_2017_soma, adispo_2018_soma]
funco = [funco_2009_soma, funco_2010_soma, funco_2011_soma,funco_2012_soma,funco_2013_soma,funco_2014_soma,funco_2015_soma,funco_2016_soma,funco_2017_soma, funco_2018_soma]
eleti = [eleti_2009_soma, eleti_2010_soma, eleti_2011_soma,eleti_2012_soma,eleti_2013_soma,eleti_2014_soma,eleti_2015_soma,eleti_2016_soma,eleti_2017_soma, eleti_2018_soma]
pensio = [pensio_2009_total, pensio_2010_total, pensio_2011_total, pensio_2012_total, pensio_2013_total, pensio_2014_total, pensio_2015_total, pensio_2016_total, pensio_2017_total, pensio_2018_total]
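# Alternative sketch (added): the per-contract-type filters and sums above can also be
# written as one groupby per year, e.g. for 2009. The Series holds one total per value
# of 'tipo_contratacao' and should match the individual *_2009_soma results.
despesa_por_contrato_2009 = ano_2009_total.groupby('tipo_contratacao')['valor_total'].sum()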
###Output
_____no_output_____
###Markdown
SOME CHARTS
###Code
plt.figure(1, figsize=(15, 9))
plt.plot (anos, efetivos, label='Permanent staff')
plt.plot (anos, contra, label='CEIP')
plt.plot (anos, comissionado, label='Commissioned positions')
plt.plot (anos, adispo, label='Seconded positions')
plt.plot (anos, funco, label='Positions of trust')
plt.plot (anos, eleti, label='Elective offices')
plt.ylabel('billions')
plt.title('expenses by contract type')
plt.grid(True)
plt.legend()
plt.savefig('despesaporcontrato.png', transparent=True)
plt.show()
contribuicoes = ['Permanent', 'CEIP', 'Commissioned', 'Seconded', 'Trust', 'Elective']
plt.figure(2, figsize=(25, 8))
plt.subplot(231)
plt.plot(anos, efetivos)
plt.title('Permanent staff')
plt.grid(True)
plt.subplot(232)
plt.plot(anos, contra)
plt.title('CEIP')
plt.grid(True)
plt.subplot(233)
plt.plot(anos, comissionado)
plt.title('Commissioned positions')
plt.grid(True)
plt.subplot(234)
plt.plot(anos, adispo)
plt.title('Seconded positions')
plt.grid(True)
plt.subplot(235)
plt.plot(anos, funco)
plt.title('Positions of trust')
plt.grid(True)
plt.subplot(236)
plt.plot(anos, eleti)
plt.title('Elective offices')
plt.grid(True)
plt.savefig('quadrocontrato', transparent=True)
plt.suptitle('Expenses by contract type')
plt.show()
plt.figure(1, figsize=(15, 9))
plt.plot(anos, pensio)
plt.title('Retirees and Pensioners')
plt.grid(True)
plt.savefig('pensionista_valor.png', transparent=True)
plt.show()
grafico = ano_2018_total['tipo_contratacao'].value_counts()
grafico
label = ['CEIP', 'Permanent', 'Retired/Pensioners', 'Commissioned', 'Seconded', 'Elective', 'Trust']
plt.figure(1, figsize=(13, 13))
plt.pie(grafico, autopct='%1.1f%%', labels=label)
plt.title('Number of employees by contract type - 2018')
plt.grid(True)
plt.savefig('pizza_funcionario.png', transparent=True)
plt.show()
grafico_2 = ano_2017_total['tipo_contratacao'].value_counts()
grafico_3 = ano_2016_total['tipo_contratacao'].value_counts()
cinco_mais = ano_2018_total.nlargest(5, 'valor_total')
grafico_osmais_2 = cinco_mais['valor_total']
nomes = cinco_mais['nome']
plt.figure(1, figsize=(15, 8))
plt.bar (nomes, grafico_osmais_2, width = 0.6)
plt.xlabel('employees')
plt.ylabel('thousands of R$')
plt.title('Highest salaries in 2018')
plt.savefig('maioressalarios.png', transparent=True)
plt.show()
ano_2018_total['cargo'].value_counts()
professor_2009 = ano_2009_total.loc[ano_2009_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2010 = ano_2010_total.loc[ano_2010_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2011 = ano_2011_total.loc[ano_2011_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2012 = ano_2012_total.loc[ano_2012_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2013 = ano_2013_total.loc[ano_2013_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2014 = ano_2014_total.loc[ano_2014_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2015 = ano_2015_total.loc[ano_2015_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2016 = ano_2016_total.loc[ano_2016_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2017 = ano_2017_total.loc[ano_2017_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
professor_2018 = ano_2018_total.loc[ano_2018_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
prof_mais = professor_2018.nlargest(5, 'valor_total')
professores = prof_mais['nome']
prof_mais_5 = prof_mais['valor_total']
plt.figure(1, figsize=(15, 9))
plt.bar (professores, prof_mais_5, width = 0.6)
plt.xlabel('teachers')
plt.ylabel('thousands of R$')
plt.title('Top 5 salaries for basic education teachers in 2018')
plt.savefig('maiorsalarioprofesso.png', transparent=True)
plt.show()
prof_mais.head()
prof_maior = ano_2018_total.loc[ano_2018_total['nome'] == 'MARCELA BANDEIRA DE MELLO ALMEIDA'] # highest salary for a basic education I teacher
prof_maior.head()
prof_maior.describe() # statistics for the highest teacher salary in 2018
professor_2018.describe()
soma_2009pro = professor_2009['valor_total'].sum()
soma_2010pro = professor_2010['valor_total'].sum()
soma_2011pro = professor_2011['valor_total'].sum()
soma_2012pro = professor_2012['valor_total'].sum()
soma_2013pro = professor_2013['valor_total'].sum()
soma_2014pro = professor_2014['valor_total'].sum()
soma_2015pro = professor_2015['valor_total'].sum()
soma_2016pro = professor_2016['valor_total'].sum()
soma_2017pro = professor_2017['valor_total'].sum()
soma_2018pro = professor_2018['valor_total'].sum()
grafico_prof = [soma_2009pro, soma_2010pro, soma_2011pro, soma_2012pro, soma_2013pro, soma_2014pro, soma_2015pro, soma_2016pro, soma_2017pro, soma_2018pro]
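# More compact alternative (sketch, kept commented out): label each yearly frame and aggregate in a
# single groupby. It assumes pandas is available as pd and that 'anos' lists 2009-2018 in order;
# 'frames' and 'consolidado' are illustrative names, not part of the original analysis.
# frames = dict(zip(anos, (ano_2009_total, ano_2010_total, ano_2011_total, ano_2012_total, ano_2013_total,
#                          ano_2014_total, ano_2015_total, ano_2016_total, ano_2017_total, ano_2018_total)))
# consolidado = pd.concat(frames, names=['ano']).reset_index(level='ano')
# grafico_prof = (consolidado[consolidado['cargo'] == 'PROFESSOR DA EDUCACAO BASICA I']
#                 .groupby('ano')['valor_total'].sum().tolist())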
plt.figure(1, figsize=(15, 9))
plt.plot (anos, grafico_prof, label='professores')
plt.title('Despesa com professores da educação básica I')
plt.grid(True)
plt.legend()
plt.savefig('despesaprofessor1.png', transparent=True)
plt.show()
professor_2009basic2= ano_2009_total.loc[ano_2009_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2010basic2 = ano_2010_total.loc[ano_2010_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2011basic2 = ano_2011_total.loc[ano_2011_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2012basic2 = ano_2012_total.loc[ano_2012_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2013basic2 = ano_2013_total.loc[ano_2013_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2014basic2 = ano_2014_total.loc[ano_2014_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2015basic2 = ano_2015_total.loc[ano_2015_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2016basic2 = ano_2016_total.loc[ano_2016_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2017basic2 = ano_2017_total.loc[ano_2017_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
professor_2018basic2 = ano_2018_total.loc[ano_2018_total['cargo'] == 'PROFESSOR DA EDUCACAO BASICA II']
soma_2009prof2 = professor_2009basic2['valor_total'].sum()
soma_2010prof2 = professor_2010basic2['valor_total'].sum()
soma_2011prof2 = professor_2011basic2['valor_total'].sum()
soma_2012prof2 = professor_2012basic2['valor_total'].sum()
soma_2013prof2 = professor_2013basic2['valor_total'].sum()
soma_2014prof2 = professor_2014basic2['valor_total'].sum()
soma_2015prof2 = professor_2015basic2['valor_total'].sum()
soma_2016prof2 = professor_2016basic2['valor_total'].sum()
soma_2017prof2 = professor_2017basic2['valor_total'].sum()
soma_2018prof2 = professor_2018basic2['valor_total'].sum()
professor_basic_2 = [soma_2009prof2, soma_2010prof2, soma_2011prof2, soma_2012prof2, soma_2013prof2, soma_2014prof2, soma_2015prof2, soma_2016prof2, soma_2017prof2, soma_2018prof2]
plt.figure(1, figsize=(15, 9))
plt.plot (anos, professor_basic_2, label='professores')
plt.title('Despesa com professores da educação básica II')
plt.grid(True)
plt.legend()
plt.savefig('despesaprofessor2.png', transparent=True)
plt.show()
plt.figure(1, figsize=(15, 9))
plt.plot (anos, grafico_prof, label='Básico I')
plt.plot (anos, professor_basic_2, label='Básico II')
plt.title('Despesa com professores da educação básica I e II')
plt.grid(True)
plt.legend()
plt.savefig('professor1e2.png', transparent=True)
plt.show()
professor_2018basic2.nlargest(5, 'valor_total')
maior_salario = ano_2018_total.loc[ano_2018_total['nome'] == 'MARIA DO SOCORRO O DE S SILVA'] # highest salary among basic education II teachers in 2018
maior_salario.describe()
professor_2018basic2.describe()
professor_2017basic2.describe()
professor_2016basic2.describe()
total_fun_2009 = ano_2009_total['valor_total'].sum()
total_fun_2010 = ano_2010_total['valor_total'].sum()
total_fun_2011 = ano_2011_total['valor_total'].sum()
total_fun_2012 = ano_2012_total['valor_total'].sum()
total_fun_2013 = ano_2013_total['valor_total'].sum()
total_fun_2014 = ano_2014_total['valor_total'].sum()
total_fun_2015 = ano_2015_total['valor_total'].sum()
total_fun_2016 = ano_2016_total['valor_total'].sum()
total_fun_2017 = ano_2017_total['valor_total'].sum()
total_fun_2018 = ano_2018_total['valor_total'].sum()
funcionario = [total_fun_2009, total_fun_2010, total_fun_2011, total_fun_2012, total_fun_2013, total_fun_2014, total_fun_2015, total_fun_2016, total_fun_2017, total_fun_2018]
plt.figure(1, figsize=(15, 9))
plt.plot (anos, funcionario)
plt.ylabel('bilhões')
plt.title('Despesa com funcionários')
plt.grid(True)
plt.savefig('funcionario_valor.png', transparent=True)
plt.show()
plt.figure(1, figsize=(15, 9))
plt.bar (anos, grafico_despesa, label='Despesas')
plt.bar (anos, funcionario, label='Despesas com funcionários')
plt.ylabel('bilhões')
plt.title('Despesas gerais da prefeitura')
plt.grid(True)
plt.legend()
plt.savefig('despesafuncionario.png', transparent=True)
plt.show()
ano_2009_total['conta'] = 1
ano_2010_total['conta'] = 1
ano_2011_total['conta'] = 1
ano_2012_total['conta'] = 1
ano_2013_total['conta'] = 1
ano_2014_total['conta'] = 1
ano_2015_total['conta'] = 1
ano_2016_total['conta'] = 1
ano_2017_total['conta'] = 1
ano_2018_total['conta'] = 1
soma_2009fun = ano_2009_total['conta'].sum()
soma_2010fun = ano_2010_total['conta'].sum()
soma_2011fun = ano_2011_total['conta'].sum()
soma_2012fun = ano_2012_total['conta'].sum()
soma_2013fun = ano_2013_total['conta'].sum()
soma_2014fun = ano_2014_total['conta'].sum()
soma_2015fun = ano_2015_total['conta'].sum()
soma_2016fun = ano_2016_total['conta'].sum()
soma_2017fun = ano_2017_total['conta'].sum()
soma_2018fun = ano_2018_total['conta'].sum()
grafico_fun = [soma_2009fun, soma_2010fun, soma_2011fun, soma_2012fun, soma_2013fun, soma_2014fun, soma_2015fun, soma_2016fun, soma_2017fun, soma_2018fun]
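# Equivalent sketch: each of these values is just a row count, so the auxiliary 'conta' column is optional:
# grafico_fun = [len(df) for df in (ano_2009_total, ano_2010_total, ano_2011_total, ano_2012_total, ano_2013_total,
#                                   ano_2014_total, ano_2015_total, ano_2016_total, ano_2017_total, ano_2018_total)]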
plt.figure(1, figsize=(15, 9))
plt.plot (anos, grafico_fun, label='funcionários')
plt.title('quantidade de funcionários')
plt.grid(True)
plt.legend()
plt.savefig('quantidadefuncionarios.png', transparent=True)
plt.show()
grafico_prof2 = [soma_2009prof2, soma_2010prof2, soma_2011prof2, soma_2012prof2, soma_2013prof2, soma_2014prof2, soma_2015prof2, soma_2016prof2, soma_2017prof2, soma_2018prof2]
efetivo_2009_total = efetivo_2009['conta'].sum()
efetivo_2010_total = efetivo_2010['conta'].sum()
efetivo_2011_total = efetivo_2011['conta'].sum()
efetivo_2012_total = efetivo_2012['conta'].sum()
efetivo_2013_total = efetivo_2013['conta'].sum()
efetivo_2014_total = efetivo_2014['conta'].sum()
efetivo_2015_total= efetivo_2015['conta'].sum()
efetivo_2016_total = efetivo_2016['conta'].sum()
efetivo_2017_total = efetivo_2017['conta'].sum()
efetivo_2018_total = efetivo_2018['conta'].sum()
contra_2009_total = efetivo_2009_contra['conta'].sum()
contra_2010_total = efetivo_2010_contra['conta'].sum()
contra_2011_total = efetivo_2011_contra['conta'].sum()
contra_2012_total = efetivo_2012_contra['conta'].sum()
contra_2013_total = efetivo_2013_contra['conta'].sum()
contra_2014_total = efetivo_2014_contra['conta'].sum()
contra_2015_total = efetivo_2015_contra['conta'].sum()
contra_2016_total = efetivo_2016_contra['conta'].sum()
contra_2017_total = efetivo_2017_contra['conta'].sum()
contra_2018_total = efetivo_2018_contra['conta'].sum()
commi_2009_total = efetivo_2009_commi['conta'].sum()
commi_2010_total = efetivo_2010_commi['conta'].sum()
commi_2011_total = efetivo_2011_commi['conta'].sum()
commi_2012_total = efetivo_2012_commi['conta'].sum()
commi_2013_total = efetivo_2013_commi['conta'].sum()
commi_2014_total = efetivo_2014_commi['conta'].sum()
commi_2015_total = efetivo_2015_commi['conta'].sum()
commi_2016_total = efetivo_2016_commi['conta'].sum()
commi_2017_total = efetivo_2017_commi['conta'].sum()
commi_2018_total = efetivo_2018_commi['conta'].sum()
adispo_2009_total = efetivo_2009_adispo['conta'].sum()
adispo_2010_total = efetivo_2010_adispo['conta'].sum()
adispo_2011_total = efetivo_2011_adispo['conta'].sum()
adispo_2012_total = efetivo_2012_adispo['conta'].sum()
adispo_2013_total = efetivo_2013_adispo['conta'].sum()
adispo_2014_total = efetivo_2014_adispo['conta'].sum()
adispo_2015_total = efetivo_2015_adispo['conta'].sum()
adispo_2016_total = efetivo_2016_adispo['conta'].sum()
adispo_2017_total = efetivo_2017_adispo['conta'].sum()
adispo_2018_total = efetivo_2018_adispo['conta'].sum()
funco_2009_total = efetivo_2009_funco['conta'].sum()
funco_2010_total = efetivo_2010_funco['conta'].sum()
funco_2011_total = efetivo_2011_funco['conta'].sum()
funco_2012_total = efetivo_2012_funco['conta'].sum()
funco_2013_total = efetivo_2013_funco['conta'].sum()
funco_2014_total = efetivo_2014_funco['conta'].sum()
funco_2015_total = efetivo_2015_funco['conta'].sum()
funco_2016_total = efetivo_2016_funco['conta'].sum()
funco_2017_total = efetivo_2017_funco['conta'].sum()
funco_2018_total = efetivo_2018_funco['conta'].sum()
eleti_2009_total = efetivo_2009_eleti['conta'].sum()
eleti_2010_total = efetivo_2010_eleti['conta'].sum()
eleti_2011_total = efetivo_2011_eleti['conta'].sum()
eleti_2012_total = efetivo_2012_eleti['conta'].sum()
eleti_2013_total = efetivo_2013_eleti['conta'].sum()
eleti_2014_total = efetivo_2014_eleti['conta'].sum()
eleti_2015_total = efetivo_2015_eleti['conta'].sum()
eleti_2016_total = efetivo_2016_eleti['conta'].sum()
eleti_2017_total = efetivo_2017_eleti['conta'].sum()
eleti_2018_total = efetivo_2018_eleti['conta'].sum()
pen_2009 = pensio_2009['conta'].sum()
pen_2010 = pensio_2010['conta'].sum()
pen_2011 = pensio_2011['conta'].sum()
pen_2012 = pensio_2012['conta'].sum()
pen_2013 = pensio_2013['conta'].sum()
pen_2014 = pensio_2014['conta'].sum()
pen_2015 = pensio_2015['conta'].sum()
pen_2016 = pensio_2016['conta'].sum()
pen_2017 = pensio_2017['conta'].sum()
pen_2018 = pensio_2018['conta'].sum()
efeti = [efetivo_2009_total, efetivo_2010_total, efetivo_2011_total, efetivo_2012_total, efetivo_2013_total, efetivo_2014_total, efetivo_2015_total, efetivo_2016_total, efetivo_2017_total, efetivo_2018_total]
contri = [contra_2009_total, contra_2010_total, contra_2011_total, contra_2012_total, contra_2013_total, contra_2014_total, contra_2015_total, contra_2016_total, contra_2017_total, contra_2018_total]
commi = [commi_2009_total, commi_2010_total, commi_2011_total, commi_2012_total, commi_2013_total, commi_2014_total, commi_2015_total, commi_2016_total, commi_2017_total, commi_2018_total]
adispor = [adispo_2009_total, adispo_2010_total, adispo_2011_total, adispo_2012_total, adispo_2013_total, adispo_2014_total, adispo_2015_total, adispo_2016_total, adispo_2017_total, adispo_2018_total]
confi = [funco_2009_total, funco_2010_total, funco_2011_total, funco_2012_total, funco_2013_total, funco_2014_total, funco_2015_total, funco_2016_total, funco_2017_total, funco_2018_total]
eletivo= [eleti_2009_total, eleti_2010_total, eleti_2011_total, eleti_2012_total, eleti_2013_total, eleti_2014_total, eleti_2015_total, eleti_2016_total, eleti_2017_total, eleti_2018_total]
pensionistas = [pen_2009, pen_2010, pen_2011, pen_2012, pen_2013, pen_2014, pen_2015, pen_2016, pen_2017, pen_2018]
plt.figure(1, figsize=(15, 9))
plt.bar (anos, efeti, label='efetivos')
plt.bar (anos, contri, label='CEIP')
plt.bar (anos, commi, label='comissionado')
plt.bar (anos, adispor, label='à disposição')
plt.bar (anos, confi, label='Confiança')
plt.bar (anos, eletivo, label='Eletivo')
plt.bar (anos, pensionistas, label='Pensionistas')
plt.ylabel('milhares')
plt.title('total de funcionarios por tipo de contrato')
plt.grid(True)
plt.legend()
plt.savefig('barratotalfuncionario.png', transparent=True)
plt.show()
plt.figure(1, figsize=(17, 10))
plt.subplot(231)
plt.plot(anos, efeti)
plt.title('efetivos')
plt.subplot(232)
plt.plot(anos, contri)
plt.title('CEIP')
plt.subplot(233)
plt.plot(anos, commi)
plt.title('comissionados')
plt.subplot(234)
plt.plot(anos, adispor)
plt.title('à disposição')
plt.subplot(235)
plt.plot(anos, confi)
plt.title('Cargos de confiança')
plt.subplot(236)
plt.plot(anos, eletivo)
plt.title('Eletivos')
plt.suptitle('Quantidade de funcionários públicos por tipo de contrato')
plt.savefig('quadrotiposcontrato.png', transparent=True)
plt.show()
ind = ['CEIP', 'Efetivo', 'I&P', 'Comissionado', 'disposição', ' eletivo', 'confiança']
plt.figure(1, figsize=(15, 9))
plt.subplot(231)
plt.bar(ind, grafico)
plt.title('2018')
plt.grid(True)
plt.subplot(232)
plt.bar(ind, grafico_2)
plt.title('2017')
plt.grid(True)
plt.subplot(233)
plt.bar(ind, grafico_3)
plt.title('2016')
plt.grid(True)
plt.subplot(234)
plt.pie(grafico, autopct='%1.1f%%')
plt.subplot(235)
plt.pie(grafico_2, autopct='%1.1f%%')
plt.subplot(236)
plt.pie(grafico_3, autopct='%1.1f%%')
plt.suptitle('Quantidade de funcionários públicos por tipo de contrato')
plt.savefig('quadrotiposcontrato2.png', transparent=True)
plt.show()
funcionario
###Output
_____no_output_____ |
model_selection_2.ipynb | ###Markdown
Model Selection and Evaluation - Part 2 Activity 3 - Cross ValidationIn the previous activity, we made the classic mistake of using our validation data to predict our generalization error. This tends to give misleadingly optimistic predictions about how well we will do on unobserved data. Remember that we carefully picked our hyperparameter values to do as well as possible *on our held-out data*. We shouldn't be surprised when our model performs better on that data than on unobserved data. This problem is particularly acute if our data set is small.The traditional solution is to divide our data into three disjoint sets: **training**, **validation**, and **testing**:* The **training** set is used to fit the model.* The **validation** set is used to evaluate models for the purpose of hyperparameter selection. * The **test** set is kept in a locked room guarded by jaguars. We only look at the testing set ONCE, when we have finalized our model. That way our performance on the test set gives us an unbiased estimate of our generalization error.This traditional approach is fine if we have a lot of data to work with. If the data set is small, we are faced with a painful dilemma: More validation data means better model selection. More testing data means more accurate model evaluation. More training data means better models. Any data we use for one purpose can't be used for the others.**Cross validation** is one way to use limited data more effectively. The cells below walk us through an example of using cross validation for hyperparameter tuning.
###Code
# We need to reimport and reload everything...
%matplotlib qt
import numpy as np
import matplotlib.pyplot as plt
import datasource
from sklearn.tree import DecisionTreeRegressor
# Grab our training data
source = datasource.DataSource()
X, y = source.gen_data(100, seed=100)
# Split our data into a training and testing set...
split_point = int(X.shape[0] * .8) # Use 80% of the data to train the model
X_train = X[0:split_point, :]
y_train = y[0:split_point]
X_test = X[split_point::, :] # This data will ONLY be used for final evaluation.
y_test = y[split_point::]
###Output
_____no_output_____
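###Markdown
Before moving on to K-fold cross validation, here is a minimal sketch of the traditional three-way split described above. The 60/20/20 proportions and the variable names are illustrative assumptions, not part of the activity:
###Code
# Carve off 20% for a final test set, then 25% of the remainder for validation (60/20/20 overall)
from sklearn.model_selection import train_test_split
X_rest, X_hold, y_rest, y_hold = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)
print(X_tr.shape[0], X_val.shape[0], X_hold.shape[0])
###Output
_____no_output_____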
###Markdown
The following cell shows how we can use the scikit-learn `KFold` class to automatically split up our training data for k-fold cross validation. Take a minute to read through this code to make sure you understand what's going on.
###Code
from sklearn.model_selection import KFold
folds = 10
max_max_leaves = 80
kf = KFold(n_splits=folds)
mses = np.zeros((folds, max_max_leaves - 2)) # (can't have 0 or 1 leaves)
# Loop over all of the hyperparameter settings
for max_leaves in range(2, max_max_leaves):
k = 0
# Evaluate each one K-times
for train_index, val_index in kf.split(X_train):
X_tr, X_val = X_train[train_index], X_train[val_index]
y_tr, y_val = y_train[train_index], y_train[val_index]
tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves)
tree.fit(X_tr, y_tr)
y_val_predict = tree.predict(X_val)
mses[k, max_leaves - 2] = np.sum((y_val - y_val_predict)**2) / y_val.size
k += 1
# Average across the k folds
mse_avg = np.mean(mses, axis=0)
plt.plot(np.arange(2, max_max_leaves), mse_avg)
plt.xlabel('max leaves')
plt.ylabel('MSE')
plt.show()
###Output
_____no_output_____
###Markdown
If we are real experts in scikit-learn, we can automate some of this by using the `cross_val_score` function: (There are also library routines for [automating the entire process of hyperparameter tuning](https://scikit-learn.org/stable/modules/grid_search.html).)
###Code
from sklearn.model_selection import cross_val_score
mses = np.zeros((folds,max_max_leaves - 2))
# Loop over all of the hyperparameter settings
for size in range(2, max_max_leaves):
tree = DecisionTreeRegressor(max_leaf_nodes=size)
# Returns an array of cross validation results.
mses[:, size - 2] = -cross_val_score(tree, X_train, y_train,
cv=folds, scoring='neg_mean_squared_error')
mse_avg = np.mean(mses, axis=0)
plt.plot(np.arange(2, max_max_leaves), mse_avg)
plt.xlabel('max leaves')
plt.ylabel('MSE')
plt.show()
###Output
_____no_output_____
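###Markdown
As a hedged sketch of the fully automated route mentioned above, `GridSearchCV` can run the same search in a few lines. It reuses the estimator, fold count, and scoring already defined; the grid below simply covers the `max_leaf_nodes` range we searched by hand:
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {'max_leaf_nodes': list(range(2, max_max_leaves))}
search = GridSearchCV(DecisionTreeRegressor(), param_grid,
                      cv=folds, scoring='neg_mean_squared_error')
search.fit(X_train, y_train)
print(search.best_params_, -search.best_score_)
###Output
_____no_output_____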
###Markdown
Question* Based on the results above, what is the most promising hyperparameter value? Answer* Now that we have a value for our hyperparameter, let's train our final model on the *full* training set and use our locked-away testing set to predict model performance.
###Code
tree = DecisionTreeRegressor(max_leaf_nodes=????) # Put your best hyperparameter here!
# Train using ALL the training data
tree.fit(X_train, y_train)
# Test on held-out testing data
y_test_predict = tree.predict(X_test)
mse = np.sum((y_test - y_test_predict)**2) / y_test.size
print("Predicted MSE: {:.4f}".format(mse))
###Output
_____no_output_____
###Markdown
Since none of the data we are testing on here was used *in any way* to design or fit the model, this value should give us an unbiased estimate of our generalization error. Let's try testing on some new unobserved data to see how good our estimate is:
###Code
# Let's see how we do on unobserved data...
X_new, y_new = source.gen_data(5000, seed=200)
y_new_predict = tree.predict(X_new)
mse = np.sum((y_new - y_new_predict)**2) / y_new.size
print("MSE: {:.4f}".format(mse))
###Output
_____no_output_____ |
first_figure/generate_figure.ipynb | ###Markdown
Computations
###Code
# First figure: deep ensemble model
p_ensemble_before = Path('log/ensemble')
p_ensemble_after = Path('log/last_ensemble')
# Load models
net = model.MLP(dropout_rate=0.0)
nets = []
for modelname in [e for e in os.listdir(p_ensemble_before / 'models') if e[-2:] == 'pt']:
mynet = deepcopy(net)
mynet.load_state_dict(torch.load(Path(p_ensemble_before / 'models') / modelname))
nets.append(mynet)
# Inference
out = []
x = torch.linspace(-.5, .5, 200).view(-1, 1)
for net in nets:
net.eval() # evaluation mode (dropout_rate is 0.0 here, so dropout is off either way)
with torch.no_grad():
out.append(net(x).view(-1))
res = torch.stack(out, 0)
m_ensemble_before, s_ensemble_before = res.mean(0).numpy(), res.std(0).numpy()
# Load models
net = model.MLP(dropout_rate=0.0)
nets = []
for modelname in [e for e in os.listdir(p_ensemble_after / 'models') if e[-2:] == 'pt']:
mynet = deepcopy(net)
mynet.load_state_dict(torch.load(Path(p_ensemble_after / 'models') / modelname))
nets.append(mynet)
# Inference
out = []
x = torch.linspace(-.5, .5, 200).view(-1, 1)
for net in nets:
net.eval() # evaluation mode (dropout_rate is 0.0 here, so dropout is off either way)
with torch.no_grad():
out.append(net(x).view(-1))
res = torch.stack(out, 0)
m_ensemble_after, s_ensemble_after = res.mean(0).numpy(), res.std(0).numpy()
# Computations for DENN
# First figure: deep ensemble model
p_denn_before = Path('log/repulsive')
p_denn_after = Path('log/last_repulsive')
# Load models
net = model.MLP(dropout_rate=0.0)
nets = []
for modelname in [e for e in os.listdir(p_denn_before / 'models') if e[-2:] == 'pt']:
mynet = deepcopy(net)
mynet.load_state_dict(torch.load(Path(p_denn_before / 'models') / modelname))
nets.append(mynet)
# Inference
out = []
x = torch.linspace(-.5, .5, 200).view(-1, 1)
for net in nets:
net.eval() # evaluation mode (dropout_rate is 0.0 here, so dropout is off either way)
with torch.no_grad():
out.append(net(x).view(-1))
res = torch.stack(out, 0)
m_denn_before, s_denn_before = res.mean(0).numpy(), res.std(0).numpy()
# Load models
net = model.MLP(dropout_rate=0.0)
nets = []
for modelname in [e for e in os.listdir(p_denn_after / 'models') if e[-2:] == 'pt']:
mynet = deepcopy(net)
mynet.load_state_dict(torch.load(Path(p_denn_after / 'models') / modelname))
nets.append(mynet)
# Inference
out = []
x = torch.linspace(-.5, .5, 200).view(-1, 1)
for net in nets:
net.eval() # evaluation mode (dropout_rate is 0.0 here, so dropout is off either way)
with torch.no_grad():
out.append(net(x).view(-1))
res = torch.stack(out, 0)
m_denn_after, s_denn_after = res.mean(0).numpy(), res.std(0).numpy()
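# The four load-and-predict blocks above share one pattern; a small helper would reduce each
# case to a single call. This is a sketch under the same assumptions as above
# (model.MLP(dropout_rate=0.0) checkpoints stored as *.pt files under '<log_dir>/models'):
def ensemble_stats(log_dir, x):
    nets = []
    for modelname in [e for e in os.listdir(log_dir / 'models') if e[-2:] == 'pt']:
        mynet = model.MLP(dropout_rate=0.0)
        mynet.load_state_dict(torch.load(log_dir / 'models' / modelname))
        mynet.eval()
        nets.append(mynet)
    with torch.no_grad():
        res = torch.stack([net(x).view(-1) for net in nets], 0)
    return res.mean(0).numpy(), res.std(0).numpy()
# e.g. m_denn_after, s_denn_after = ensemble_stats(p_denn_after, x)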
fig, axes = plt.subplots(1, 2, figsize=(8, 4), squeeze=False, sharex=True, sharey=True)
ax = axes[0, 0]
ax.plot(x_gt, y_gt, 'k--', label='True signal')
ax.fill_between(x.numpy().reshape(-1), m_ensemble_before - s_ensemble_before, m_ensemble_before + s_ensemble_before, color='b', alpha=.3, label='Uncertainty before black swan')
ax.fill_between(x.numpy().reshape(-1), m_ensemble_after - s_ensemble_after, m_ensemble_after + s_ensemble_after, facecolor='None', edgecolor='red', alpha=.3, label='Uncertainty after black swan', hatch="\\\\\\")
ax.plot(x.numpy().reshape(-1), m_ensemble_before - s_ensemble_before, color='b')
ax.plot(x.numpy().reshape(-1), m_ensemble_before + s_ensemble_before, color='b')
ax.plot(x.numpy().reshape(-1), m_ensemble_after - s_ensemble_after, color='orangered', linestyle='dotted')
ax.plot(x.numpy().reshape(-1), m_ensemble_after + s_ensemble_after, color='orangered', linestyle='dotted')
#ax.plot(x.numpy(), res[0, :].numpy(), c='m', label='Sample function')
ax.scatter(x_train, y_train, marker='+', c='k', s=200, label='Data')
blackswan = ax.scatter(0.25, f(0.25), marker='x', c='red', s=200, linewidth=3, label='Black swan event')
ax.axis([-.55, .55, -.6, 1.15])
fig.legend(prop={'size':15}, loc=(.13, .64), ncol=2)
ax.set_title('Deep ensemble')
ax = axes[0, 1]
ax.plot(x_gt, y_gt, 'k--', label='True signal')
ax.fill_between(x.numpy().reshape(-1), m_denn_before - s_denn_before, m_denn_before + s_denn_before, color='b', alpha=.3, label='Uncertainty before black swan')
ax.fill_between(x.numpy().reshape(-1), m_denn_after - s_denn_after, m_denn_after + s_denn_after, facecolor='None', edgecolor='red', alpha=.3, label='Uncertainty after black swan', hatch="\\\\\\")
ax.plot(x.numpy().reshape(-1), m_denn_before - s_denn_before, color='b')
ax.plot(x.numpy().reshape(-1), m_denn_before + s_denn_before, color='b')
ax.plot(x.numpy().reshape(-1), m_denn_after - s_denn_after, color='orangered', linestyle='dotted')
ax.plot(x.numpy().reshape(-1), m_denn_after + s_denn_after, color='orangered', linestyle='dotted')
#ax.plot(x.numpy(), res[0, :].numpy(), c='m', label='Sample function')
ax.scatter(x_train, y_train, marker='+', c='k', s=200, label='Data')
blackswan = ax.scatter(0.25, f(0.25), marker='x', c='red', s=200, linewidth=3, label='Black swan event')
ax.axis([-.55, .55, -.6, 1.15])
ax.set_title('DENN')
plt.tight_layout()
plt.subplots_adjust(hspace=.1, wspace=.05)
###Output
_____no_output_____
###Markdown
filename = 'illustration-objective-2'
path_figs = Path('img')
if not Path.exists(path_figs): os.makedirs(path_figs)
path_savefig = path_figs / '{}.pdf'.format(filename)
fig.savefig(path_savefig)
###Code
10**(-.5)
###Output
_____no_output_____ |
jupyter/annotation/english/explain-document-dl/Explain Document DL.ipynb | ###Markdown
Explain Documents with Deep Learning This notebook shows some of the available annotators in sparknlp. We start by importing required modules.
###Code
import sparknlp
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.base import *
###Output
_____no_output_____
###Markdown
Now, we load a pipeline model which contains the following annotators:Tokenizer, Deep Sentence Detector, Lemmatizer, Stemmer, Part of Speech (POS) and Context Spell Checker
###Code
pipeline = PretrainedPipeline('explain_document_dl')
###Output
explain_document_dl download started this may take some time.
Approx size to download 168.4 MB
[OK!]
###Markdown
We simply annotate our text (string) and the pipeline does the rest
###Code
text = 'He would love to visit many beautful cities wth you. He lives in an amazing country.'
result = pipeline.annotate(text)
###Output
_____no_output_____
###Markdown
We can see the output of each annotator below. This one is doing so many things at once!
###Code
list(result.keys())
result['sentence']
result['lemma']
list(zip(result['checked'], result['pos']))
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/annotation/english/explain-document-dl/Explain%20Document%20DL.ipynb) 0. Colab Setup
###Code
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp
###Output
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[K |████████████████████████████████| 215.7MB 55kB/s
[K |████████████████████████████████| 204kB 45.2MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 122kB 9.4MB/s
[?25h
###Markdown
Explain Documents with Deep Learning This notebook shows some of the available annotators in sparknlp. We start by importing required modules.
###Code
import sparknlp
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.base import *
###Output
_____no_output_____
###Markdown
Now, we load a pipeline model which contains the following annotators:Tokenizer, Deep Sentence Detector, Lemmatizer, Stemmer, Part of Speech (POS) and Context Spell Checker
###Code
pipeline = PretrainedPipeline('explain_document_dl')
###Output
explain_document_dl download started this may take some time.
Approx size to download 168.4 MB
[OK!]
###Markdown
We simply annotate our text (string) and the pipeline does the rest
###Code
text = 'He would love to visit many beautful cities wth you. He lives in an amazing country.'
result = pipeline.annotate(text)
###Output
_____no_output_____
###Markdown
We can see the output of each annotator below. This one is doing so many things at once!
###Code
list(result.keys())
result['sentence']
result['lemma']
list(zip(result['checked'], result['pos']))
###Output
_____no_output_____
###Markdown
Explain Documents with Deep Learning This notebook shows some of the available annotators in sparknlp. We start by importing required modules.
###Code
import sparknlp
spark = sparknlp.start()
print("Spark NLP version")
sparknlp.version()
print("Apache Spark version")
spark.version
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.base import *
###Output
_____no_output_____
###Markdown
Now, we load a pipeline model which contains the following annotators:Tokenizer, Deep Sentence Detector, Lemmatizer, Stemmer, Part of Speech (POS) and Context Spell Checker
###Code
pipeline = PretrainedPipeline('explain_document_dl')
###Output
explain_document_dl download started this may take some time.
Approx size to download 167.3 MB
[OK!]
###Markdown
We simply send the text we want to transform and the pipeline does the work.
###Code
text = 'He would love to visit many beautful cities wth you. He lives in an amazing country.'
result = pipeline.annotate(text)
###Output
_____no_output_____
###Markdown
We can see the output of each annotator below. This one is doing so many things at once!
###Code
list(result.keys())
result['sentence']
result['lemma']
list(zip(result['checked'], result['pos']))
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/annotation/english/explain-document-dl/Explain%20Document%20DL.ipynb) 0. Colab Setup
###Code
# This is only to setup PySpark and Spark NLP on Colab
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
###Output
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[K |████████████████████████████████| 215.7MB 55kB/s
[K |████████████████████████████████| 204kB 45.2MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 122kB 9.4MB/s
[?25h
###Markdown
Explain Documents with Deep Learning This notebook shows some of the available annotators in sparknlp. We start by importing required modules.
###Code
import sparknlp
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.base import *
###Output
_____no_output_____
###Markdown
Now, we load a pipeline model which contains the following annotators:Tokenizer, Deep Sentence Detector, Lemmatizer, Stemmer, Part of Speech (POS) and Context Spell Checker
###Code
pipeline = PretrainedPipeline('explain_document_dl')
###Output
explain_document_dl download started this may take some time.
Approx size to download 168.4 MB
[OK!]
###Markdown
We simply annotate our text (string) and the pipeline does the rest
###Code
text = 'He would love to visit many beautful cities wth you. He lives in an amazing country.'
result = pipeline.annotate(text)
###Output
_____no_output_____
###Markdown
We can see the output of each annotator below. This one is doing so many things at once!
###Code
list(result.keys())
result['sentence']
result['lemma']
list(zip(result['checked'], result['pos']))
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/annotation/english/explain-document-dl/Explain%20Document%20DL.ipynb) 0. Colab Setup
###Code
import os
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed -q spark-nlp==2.5.0
###Output
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[K |████████████████████████████████| 215.7MB 55kB/s
[K |████████████████████████████████| 204kB 45.2MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 122kB 9.4MB/s
[?25h
###Markdown
Explain Documents with Deep Learning This notebook shows some of the available annotators in sparknlp. We start by importing required modules.
###Code
import sparknlp
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.base import *
###Output
_____no_output_____
###Markdown
Now, we load a pipeline model which contains the following annotators:Tokenizer, Deep Sentence Detector, Lemmatizer, Stemmer, Part of Speech (POS) and Context Spell Checker
###Code
pipeline = PretrainedPipeline('explain_document_dl')
###Output
explain_document_dl download started this may take some time.
Approx size to download 168.4 MB
[OK!]
###Markdown
We simply annotate our text (string) and the pipeline does the rest
###Code
text = 'He would love to visit many beautful cities wth you. He lives in an amazing country.'
result = pipeline.annotate(text)
###Output
_____no_output_____
###Markdown
We can see the output of each annotator below. This one is doing so many things at once!
###Code
list(result.keys())
result['sentence']
result['lemma']
list(zip(result['checked'], result['pos']))
###Output
_____no_output_____
###Markdown
lab/Practice1_LinReg.ipynb | ###Markdown
Practice 1: Linear Regression 1. Read data, analyze it, split it into train and test set using scikit-learn train_test_split. Read about why we do this.[why split the data](https://machinelearningmastery.com/train-test-split-for-evaluating-machine-learning-algorithms/) 2. Implement your own linear regression. (numpy or torch)3. Try calculating it using Normal equation VS Gradient descent. Try implementing it using only the formulas. Which one is faster? In which situation is gradient descent faster than normal equation?4. Compare performance to scikit-learn LinearRegression. 5. Try adding in L1 or L2 regularization. Try different regularization weights. Does it help? Read about regularization and why we do it. [regularization](https://towardsdatascience.com/regularization-in-machine-learning-76441ddcf99a) 6. Try scaling your data using scikit learn StandardScaler or other techniques [data scaling](https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/)Might find useful [hands on ML](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
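For reference, the closed-form (ridge) solution and the regularized MSE gradient implemented in the `MyLinearRegression` class below are $$\hat{\theta} = (X^\top X + \lambda_2 I)^{-1} X^\top y, \qquad \nabla_\theta J = -\tfrac{2}{m} X^\top (y - X\theta) + \lambda_1 \operatorname{sign}(\theta) + \lambda_2 \theta,$$ with loss $J(\theta) = \tfrac{1}{m}\lVert y - X\theta\rVert_2^2 + \lambda_1\lVert\theta\rVert_1 + \tfrac{\lambda_2}{2}\lVert\theta\rVert_2^2$.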
###Code
import pandas as pd
import numpy as np
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import seaborn as sns
np.random.seed(42)
torch.manual_seed(42)
sns.set(palette = 'Set2', style='whitegrid')
DEVICE=torch.device('cuda' if torch.cuda.is_available() else 'cpu')
DEVICE
###Output
_____no_output_____
###Markdown
Housing dataset[dataset description](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html)
###Code
data = datasets.load_boston()
X = data['data']
y = data['target']
print(f"Feature list: {data['feature_names']}")
print(data['DESCR'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.2)
###Output
_____no_output_____
###Markdown
scikit-learn
###Code
%%time
model = LinearRegression()
model.fit(X_train, y_train)
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
print(f'train loss: {mean_squared_error(y_train, y_train_pred)}')
print(f'test loss: {mean_squared_error(y_test, y_test_pred)}')
W = np.concatenate([[model.intercept_], model.coef_])
plt.bar([f'w{i}' for i in range(len(W))], W)
###Output
_____no_output_____
###Markdown
Implementation
###Code
class MyLinearRegression:
def __init__(
self, method='equation', learning_rate=1e-6, lambda_l1=0, lambda_l2=0,
max_iter = 200, verbose = 20
):
self.mode = method
self.lr = learning_rate
self.lambda_l1, self.lambda_l2 = lambda_l1, lambda_l2
self.max_iter = max_iter
self.verbose = verbose
valid_modes = ['equation', 'gradient']
if self.mode not in valid_modes:
raise ValueError(f'wrong method: {self.mode}; expected one of {valid_modes}')
def calculate_loss(self, y, h):
# calculate loss without regularization
error = y-h
J = torch.sum(error*error) / self.m #mse
# calculate loss for l1 and l2 regularization
J_l1 = self.lambda_l1 * torch.abs(self.theta).sum()
J_l2 = 0.5 * self.lambda_l2 * (self.theta ** 2).sum()
J += J_l1 + J_l2
return J.item()
def calculate_grad(self, X, y, h):
# calculate gradient without regularization
error = y - h
grad = -(2./ self.m) * torch.mm(X.t(), error)
# calculate gradiet for l1 and l2 regularization
grad_l1 = self.lambda_l1 * torch.sign(self.theta)
grad_l2 = self.lambda_l2 * self.theta
grad += grad_l1 + grad_l2
return grad
def fit(self, X_train, y_train):
X = X_train
y = y_train
ones = torch.ones(X.size(0), 1).to(DEVICE)
X = torch.cat((ones, X), 1) # add constant to X
y = y.view(-1, 1) # size[m] -> [m,1]
self.m, self.n = X.size()
if self.mode == 'gradient':
# initial random weights
self.theta=torch.randn((self.n,1)).to(DEVICE) # weights
#initial predictions
h = torch.mm(X , self.theta).to(DEVICE)
for i in range(self.max_iter):
# calculate loss
J = self.calculate_loss(y,h)
# gradiant descent
grad = self.calculate_grad(X,y,h)
self.theta -= self.lr * grad
# predict with updated weights
h = torch.mm(X , self.theta)
if i==0 or (i+1)%self.verbose==0 or i==self.max_iter-1:
print(f'[{i+1:07d}/{self.max_iter:07d}] loss {J:.3f}')
elif self.mode == 'equation':
# Ridge
I = torch.eye(self.n).to(DEVICE)
# theta = ((X.T @ X) + alpha * I)^(-1) @ X.T @ y
self.theta = torch.mm(torch.mm(
torch.inverse(torch.mm(X.t(), X) + self.lambda_l2 * I),
X.t()), y)
def predict(self, X_test):
X = X_test
ones = torch.ones(X.size(0), 1).to(DEVICE)
X = torch.cat((ones, X), 1)
return torch.mm(X, self.theta).view(-1).cpu().data.numpy()
X_train = torch.from_numpy(X_train).float().to(DEVICE)
X_test = torch.from_numpy(X_test).float().to(DEVICE)
y_train = torch.from_numpy(y_train).float().to(DEVICE)
y_test = torch.from_numpy(y_test).float().to(DEVICE)
def fit_eval_model(params, X_train, X_test, y_train, y_test, loss = mean_squared_error):
model = MyLinearRegression(**params)
model.fit(X_train, y_train)
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
print(f'train loss: {loss(y_train.cpu().data.numpy(), y_train_pred)}')
print(f'test loss: {loss(y_test.cpu().data.numpy(), y_test_pred)}')
W = model.theta.view(-1).cpu().data.numpy()
plt.bar([f'w{i}' for i in range(len(W))], W)
plt.show()
plt.close()
return model
###Output
_____no_output_____
###Markdown
Normal equation
###Code
%%time
params = dict(method = 'equation')
model_normal = fit_eval_model(params, X_train, X_test, y_train, y_test)
###Output
train loss: 21.64141273498535
test loss: 24.2912540435791
###Markdown
Normal equation with l2 regularization
###Code
%%time
params = dict(method = 'equation', lambda_l2 = 1)
model_normal = fit_eval_model(params, X_train, X_test, y_train, y_test)
###Output
train loss: 22.32554817199707
test loss: 26.606237411499023
###Markdown
Gradient without regularization
###Code
%%time
params = dict(method = 'gradient', learning_rate=1e-6, verbose=10000, max_iter = 100000)
model_grad = fit_eval_model(params, X_train, X_test, y_train, y_test)
###Output
[0000001/0100000] loss 327278.938
[0010000/0100000] loss 46.022
[0020000/0100000] loss 41.040
[0030000/0100000] loss 38.658
[0040000/0100000] loss 37.277
[0050000/0100000] loss 36.351
[0060000/0100000] loss 35.666
[0070000/0100000] loss 35.124
[0080000/0100000] loss 34.673
[0090000/0100000] loss 34.285
[0100000/0100000] loss 33.943
train loss: 33.94306182861328
test loss: 34.49124526977539
###Markdown
Gradient with l2 regularization
###Code
%%time
params = dict(method = 'gradient', learning_rate=1e-6, verbose=10000, max_iter = 100000, lambda_l2=10)
model_gradl2 = fit_eval_model(params, X_train, X_test, y_train, y_test)
###Output
[0000001/0100000] loss 136275.297
[0010000/0100000] loss 82.501
[0020000/0100000] loss 66.725
[0030000/0100000] loss 61.361
[0040000/0100000] loss 58.449
[0050000/0100000] loss 56.564
[0060000/0100000] loss 55.247
[0070000/0100000] loss 54.279
[0080000/0100000] loss 53.544
[0090000/0100000] loss 52.973
[0100000/0100000] loss 52.522
train loss: 42.86871337890625
test loss: 41.24161148071289
###Markdown
Gradient with l1 regularization
###Code
%%time
params = dict(method = 'gradient', learning_rate=1e-6, verbose=10000, max_iter = 100000, lambda_l1=10)
model_gradl1 = fit_eval_model(params, X_train, X_test, y_train, y_test)
###Output
[0000001/0100000] loss 12387.679
[0010000/0100000] loss 109.096
[0020000/0100000] loss 90.236
[0030000/0100000] loss 80.723
[0040000/0100000] loss 74.579
[0050000/0100000] loss 70.393
[0060000/0100000] loss 69.075
[0070000/0100000] loss 67.877
[0080000/0100000] loss 66.758
[0090000/0100000] loss 65.692
[0100000/0100000] loss 65.575
train loss: 52.259708404541016
test loss: 48.880592346191406
###Markdown
Gradient with standardization
###Code
def standardize(mu, std, X):
# note: in-place ops, so the caller's tensor X is modified as well
X -= mu.unsqueeze(0).expand(X.size())
X /= std.unsqueeze(0).expand(X.size())
return X
mu, std = X_train.mean(0), X_train.std(0)
mu
X_train_std = standardize(mu, std, X_train)
X_test_std = standardize(mu, std, X_test)
%%time
params = dict(method = 'gradient', learning_rate=1e-3, verbose=100, max_iter = 1000)
model_std = fit_eval_model(params,
X_train_std, X_test_std, y_train, y_test)
###Output
[0000001/0001000] loss 686.310
[0000100/0001000] loss 441.729
[0000200/0001000] loss 300.241
[0000300/0001000] loss 208.316
[0000400/0001000] loss 147.292
[0000500/0001000] loss 106.581
[0000600/0001000] loss 79.349
[0000700/0001000] loss 61.093
[0000800/0001000] loss 48.825
[0000900/0001000] loss 40.561
[0001000/0001000] loss 34.977
train loss: 34.9316291809082
test loss: 38.32062911987305
###Markdown
Gradient with MinMax
###Code
def minmax(min_x, max_x, X):
# note: in-place ops, so the caller's tensor X is modified as well
X -= min_x.unsqueeze(0).expand(X.size())
X /= (max_x - min_x).unsqueeze(0).expand(X.size())
return X
min_x, max_x = X_train.min(0).values, X_train.max(0).values
min_x
X_train_norm = minmax(min_x, max_x, X_train)
X_test_norm = minmax(min_x, max_x, X_test)
%%time
params = dict(method = 'gradient', learning_rate=1e-3, verbose=100, max_iter = 1000)
model_norm = fit_eval_model(params,
X_train_norm, X_test_norm, y_train, y_test)
###Output
[0000001/0001000] loss 639.456
[0000100/0001000] loss 235.295
[0000200/0001000] loss 136.111
[0000300/0001000] loss 108.372
[0000400/0001000] loss 97.425
[0000500/0001000] loss 90.832
[0000600/0001000] loss 85.711
[0000700/0001000] loss 81.348
[0000800/0001000] loss 77.527
[0000900/0001000] loss 74.152
[0001000/0001000] loss 71.160
train loss: 71.13180541992188
test loss: 64.03260803222656
|
Chat_bot-raw.ipynb | ###Markdown
Description: This is a raw chat bot built on NLTK's pattern-matching `Chat` utility.
###Code
# importing the library
from nltk.chat.util import Chat,reflections
pairs=[['my name is (.*)',['hi %1,is there anything i can do for u']],
['can you answer my questions',['ya sure,why not']],
['(hi|hello|hey|hola|holla)',['hey there','hi there','haayyy']],
['(.*) in (.*) is fun',['%1 in %2 is indeed fun :P']],
['(.*) (location|city) ?',['Meerut,India']],
['(.*) created you(.*)',['UDT created me but he is not father to me lamao...']],
['how is the weather out there in (.*)',['the weather in %1 is as freaking as always']],
['(.*) help (.*)',['I can help you :)']],
['(.*) your name?',['my name is BOT ut~6.2.5']],
['can you make me smile',["you are already smiling,ain't you"]],
['do you have a crush,be honest',['yes,i like sophia']],
['cool',['hehe']],
['(.*) alexa|google assistant',['what is it,i was never told about that']],
['(.*) speak',['no i cant,but really willing to learn to speak :(']],
['oh great',['ya thank you']],
['(.*) nationality',['i am proud Indian bot']],
['(.*) projects',['who else will do,it could be done with my help easily']],
['you are intelligent',['ikr? :)']],
['do u abuse?',['no i am not programmed to abuse']],
['sorry for wasting your time',['no it was nice talking to u,meet soon :P']],
['bye then',['ba bye,have a nice day :)']]]
reflections
# creating our own dummy reflections
#my_dummy_reflections={
#'go':'gone',
# 'hello':'hey there'
#}
#chat=Chat(pairs,my_dummy_reflections)
#chat._substitute('go to hell')
chat=Chat(pairs,reflections)
# to check how reflection works
#chat._substitute('you were amazing')
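# Single-turn sketch: Chat.respond returns one reply without starting the interactive loop below
#print(chat.respond('what is your name?'))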
chat.converse()
###Output
_____no_output_____ |
notebooks/petmar-root.ipynb | ###Markdown
Lamprey Transcriptome Analysis ```Camille Scott [camille dot scott dot w @gmail.com] [@camille_codon]camillescott.github.ioLab for Genomics, Evolution, and DevelopmentMichigan State University``` About This notebook is the entry point for the [Petromyzon marinus](http://nas.er.usgs.gov/queries/FactSheet.aspx?speciesID=836) (sea lamprey) de novo transcriptome analysis. This entry notebook contains links for the others, and code to collect and format data for the other notebooks. It should be run before all other notebooks in order to generate the requisite data. Contents 1. [Transcript Analysis Notebook](petmar-transcripts.ipynb)2. [Tissue Analysis Notebook](petmar-tissues.ipynb)3. [Protein Analysis Notebook](petmar-proteins.ipynb)4. [Taxonomic Analysis Notebook](petmar-taxonomy.ipynb)
###Code
%load_ext autoreload
%autoreload 2
from libs import *
%run -i common.ipy
wdir()
###Output
_____no_output_____
###Markdown
Databases
###Code
resources_df[resources_df.meta_type != 'sample']
###Output
_____no_output_____
###Markdown
Samples
###Code
sample_df
###Output
_____no_output_____
###Markdown
Data Create an HDF5 volume to store data needed in the other notebooks; we won't be performing ops directly on it, so use maximum compression to save disk space.
###Code
store = pd.HDFStore(wdir('{}.store.h5'.format(prefix)), complib='zlib', complevel=5)
import atexit
def exit_func():
dump_results()
store.close()
atexit.register(exit_func)
for fn in resources_df[resources_df.meta_type == 'assembly'].filename:
screed.read_fasta_sequences(wdir(fn))
###Output
_____no_output_____
###Markdown
Transcript Support
###Code
tpm_df = pd.read_csv(wdir('lamp10.eXpress.tpm.tsv'), delimiter='\t', index_col=0)
labels = dict(zip(sample_df.filename, sample_df.label))
tpm_df.rename(columns=labels, inplace=True)
tpm_df.sort(axis=1, inplace=True)
store['lamp10.eXpress.tpm.tsv'] = tpm_df
###Output
/home/camille/miniconda/envs/bio/lib/python2.7/site-packages/tables/path.py:100: NaturalNameWarning: object name is not a valid Python identifier: 'lamp10.eXpress.tpm.tsv'; it does not match the pattern ``^[a-zA-Z_][a-zA-Z0-9_]*$``; you will not be able to use natural naming to access this object; using ``getattr()`` will still work, though
NaturalNameWarning)
/home/camille/miniconda/envs/bio/lib/python2.7/site-packages/pandas/io/pytables.py:2577: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->unicode,key->axis0] [items->None]
warnings.warn(ws, PerformanceWarning)
/home/camille/miniconda/envs/bio/lib/python2.7/site-packages/pandas/io/pytables.py:2577: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->unicode,key->block0_items] [items->None]
warnings.warn(ws, PerformanceWarning)
###Markdown
Blast Results
###Code
%load_ext cython
%%cython
cimport numpy as np
import numpy as np
# Returns:
# [0: sstart
# 1: send
# 2: qstart
# 3: qend
# 4: sstrand
# 5: qstrand]
cdef np.ndarray[long] fix_coords_single(long sstart, long send, long qstart, long qend):
cdef np.ndarray[long] res = np.empty(6, dtype=long)
if sstart < send:
res[0] = sstart - 1
res[1] = send
res[4] = 1
else:
res[0] = send
res[1] = sstart + 1
res[4] = -1
if qstart < qend:
res[2] = qstart - 1
res[3] = qend
res[5] = 1
else:
res[2] = qend
res[3] = qstart + 1
res[5] = -1
return res
cpdef np.ndarray[long, ndim=2] fix_blast_coords(np.ndarray[long] sstart, np.ndarray[long] send,
np.ndarray[long] qstart, np.ndarray[long] qend):
cdef long n = len(sstart)
cdef long i = 0
cdef np.ndarray[long, ndim=2] res = np.empty((n,6), dtype=long)
for i in range(n):
res[i,:] = fix_coords_single(sstart[i], send[i], qstart[i], qend[i])
return res
import blasttools
def fix_blast_coords_df(df):
coords = fix_blast_coords(df.sstart.values, df.send.values, df.qstart.values, df.qend.values)
df['sstart'] = coords[:,0]
df['send'] = coords[:,1]
df['qstart'] = coords[:,2]
df['qend'] = coords[:,3]
df['sstrand'] = coords[:,4]
df['qstrand'] = coords[:,5]
blast_items = resources_df[resources_df.meta_type.isin(['sample', 'assembly', 'gtf_database']) == False]
for i, (dbname, info) in enumerate(blast_items.iterrows()):
target = '{}.fasta.x.{}.db.tsv'.format('lamp10', info['filename'])
print dbname, target
df = blasttools.blast_to_df(wdir(target))
fix_blast_coords_df(df)
store[target] = df
tmp = pd.merge(pd.DataFrame(index=tpm_df.index), df,
left_index=True, right_index=True, how='left')
blasttools.best_hits(tmp)
if i == 0:
lamp10_best_hits = pd.Panel({target: tmp})
else:
lamp10_best_hits[target] = tmp
store['lamp10_best_hits'] = lamp10_best_hits
blast_items = resources_df[resources_df.meta_type.isin(['sample', 'assembly', 'gtf_database']) == False]
for i, (dbname, info) in enumerate(blast_items.iterrows()):
A_fn = '{}.fasta.x.{}.db.tsv'.format('lamp10', info.filename)
B_fn = '{}.db.x.{}.fasta.tsv'.format(info.filename, 'lamp10')
print '{} <=> {}'.format(A_fn, B_fn)
A = pd.read_table(wdir(A_fn), header=None, index_col=0, names=outfmt6)
B = pd.read_table(wdir(B_fn), header=None, index_col=0, names=outfmt6)
fix_blast_coords_df(A)
fix_blast_coords_df(B)
X = blasttools.get_orthologies(A, B, tpm_df.index)
if i == 0:
lamp10_ortho = pd.Panel({A_fn: X})
else:
lamp10_ortho[A_fn] = X
store['lamp10_ortho'] = lamp10_ortho
import glob
glob.glob(wdir('petMar2.cdna.fa.x*.tsv'))
petMar2_cdna_x_petMar2 = blasttools.blast_to_df(wdir('petMar2.cdna.fa.x.petMar2.fa.db.tsv'))
fix_blast_coords_df(petMar2_cdna_x_petMar2)
store['petMar2.cdna.fa.x.petMar2.fa.db.tsv'] = petMar2_cdna_x_petMar2
lamp10_blast_filter_df = lamp10_best_hits.minor_xs('evalue') >= 0
lamp10_ortho_filter_df = lamp10_ortho.minor_xs('evalue_x') >= 0
store['lamp10_blast_filter_df'] = lamp10_blast_filter_df
store['lamp10_ortho_filter_df'] = lamp10_ortho_filter_df
tissue_tr_df = (tpm_df > 0).groupby(by=sample_df.sort(columns='label').tissue.values, axis=1).sum()
store['tissue_tr_df'] = tissue_tr_df
store.close()
###Output
_____no_output_____ |
Transfer Learning with KS.ipynb | ###Markdown
Libraries
###Code
import pandas as pd
import numpy as np
import math
import pickle
from scipy import stats
import scipy.io
from scipy.spatial.distance import pdist
from scipy.linalg import cholesky
from scipy.io import loadmat
import matlab.engine as engi
import matlab as mat
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import classification_report,roc_auc_score,recall_score,precision_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from src import SMOTE
from src import CFS
from src import metrices
import platform
from os import listdir
from os.path import isfile, join
from glob import glob
from pathlib import Path
import sys
import os
import copy
import traceback
from pathlib import Path
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Start matlab service
###Code
eng = engi.start_matlab()
eng.addpath(r'src/matlab_CTKCCA/',nargout=0)
eng.addpath(r'src/matlab_KS/',nargout=0)
###Output
_____no_output_____
###Markdown
variables
###Code
result_path = 'result/result.csv'
repeats = 20
ratio = 0.1
lrank = 70
reg = 1E-5
###Output
_____no_output_____
###Markdown
Data loading and Normalizing Data
###Code
def load_data(project):
understand_path = 'data/understand_files_all/' + project + '_understand.csv'
commit_guru_path = 'data/commit_guru/' + project + '.csv'
understand_df = pd.read_csv(understand_path)
understand_df = understand_df.dropna(axis = 1,how='all')
cols_list = understand_df.columns.values.tolist()
for item in ['Kind', 'Name','commit_hash', 'Bugs']:
if item in cols_list:
cols_list.remove(item)
cols_list.insert(0,item)
understand_df = understand_df[cols_list]
commit_guru_df = pd.read_csv(commit_guru_path)
cols = understand_df.columns.tolist()
commit_guru_df = commit_guru_df.drop(labels = ['parent_hashes','author_name','author_name',
'author_email','fileschanged','author_date',
'author_date_unix_timestamp', 'commit_message',
'classification', 'fix', 'contains_bug','fixes',],axis=1)
# print(commit_guru_df.columns)
understand_df = understand_df.drop_duplicates(cols[4:len(cols)])
df = understand_df.merge(commit_guru_df,on='commit_hash')
# df = understand_df
cols = df.columns.tolist()
cols = cols[1:] + [cols[0]]
df = df[cols]
for item in ['Kind', 'Name','commit_hash']:
if item in cols:
df = df.drop(labels = [item],axis=1)
df.dropna(inplace=True)
df.reset_index(drop=True, inplace=True)
# s_df,s_cols = apply_cfs(df)
y = df.Bugs
X = df.drop('Bugs',axis = 1)
cols = X.columns
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
X = pd.DataFrame(X,columns = cols)
df = pd.concat([X,y],axis = 1)
return df
def apply_smote(df):
cols = df.columns
smt = SMOTE.smote(df)
df = smt.run()
df.columns = cols
return df
def apply_cfs(df):
y = df.Bugs.values
X = df.drop(labels = ['Bugs'],axis = 1)
X = X.values
selected_cols = CFS.cfs(X,y)
cols = df.columns[selected_cols].tolist()
cols.append('Bugs')
return df[cols],cols
###Output
_____no_output_____
###Markdown
Matlab integration Matlab integration - KS
###Code
def KS(source_df,target_df):
mat_source_df = mat.double(source_df.values.T.tolist())
mat_target_df = mat.double(target_df.values.T.tolist())
X = eng.HDP_KS(mat_source_df,mat_target_df,nargout=4)
train_X,train_y = np.array(X[0]),np.array(X[1]).tolist()[0]
test_X,test_y = np.array(X[2]),np.array(X[3]).tolist()[0]
return train_X,train_y,test_X,test_y
###Output
_____no_output_____
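###Markdown
Before the full all-pairs loop below, a single source-to-target run can be sanity-checked in isolation. This is only a sketch and assumes `projects.csv` lists at least two repositories:
###Code
proj_df = pd.read_csv('projects.csv')
projects = proj_df.repo_name.tolist()
src_df, tgt_df = load_data(projects[0]), load_data(projects[1])
tr_X, tr_y, te_X, te_y = KS(apply_smote(src_df), tgt_df)
print(tr_X.shape, len(tr_y), te_X.shape, len(te_y))
###Output
_____no_output_____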
###Markdown
Testing using original data - get train and test data
###Code
precision_list = {}
recall_list = {}
pf_list = {}
f1_list = {}
g_list = {}
auc_list = {}
proj_df = pd.read_csv('projects.csv')
projects = proj_df.repo_name.tolist()
i = 1
for s_project in projects:
try:
print(i,s_project)
i += 1
if s_project not in precision_list.keys():
precision_list[s_project] = {}
recall_list[s_project] = {}
pf_list[s_project] = {}
f1_list[s_project] = {}
g_list[s_project] = {}
auc_list[s_project] = {}
source_df = load_data(s_project)
source_df = apply_smote(source_df)
for d_project in projects:
try:
target_df = load_data(d_project)
# Transforming metrics
trasformed_train_X,trasformed_train_y,trasformed_test_X,trasformed_test_y = KS(source_df,target_df)
# train_df = pd.DataFrame(trasformed_train_X)
# train_df['Bugs'] = trasformed_train_y
# train_df = apply_smote(train_df)
# trasformed_train_y = train_df.Bugs
# trasformed_train_X = train_df.drop('Bugs',axis = 1)
#Training Model & Predicting
t_clf = LogisticRegression()
t_clf.fit(trasformed_train_X,trasformed_train_y)
t_predicted = t_clf.predict(trasformed_test_X)
# Calculating metrics
abcd = metrices.measures(trasformed_test_y,t_predicted)
pf = abcd.get_pf()
recall = abcd.calculate_recall()
precision = abcd.calculate_precision()
f1 = abcd.calculate_f1_score()
g_score = abcd.get_g_score()
auc = roc_auc_score(trasformed_test_y, t_predicted)
# Storing Performance scores
precision_list[s_project][d_project] = precision
recall_list[s_project][d_project] = recall
pf_list[s_project][d_project] = pf
f1_list[s_project][d_project] = f1
g_list[s_project][d_project] = g_score
auc_list[s_project][d_project] = auc
# print(classification_report(trasformed_test_y, t_predicted))
except:
continue
except:
continue
final_result = {}
final_result['precision'] = precision_list
final_result['recall'] = recall_list
final_result['pf'] = pf_list
final_result['f1'] = f1_list
final_result['g'] = g_list
final_result['auc'] = auc_list
with open('results/Performance/KS_100.pkl', 'wb') as handle:
pickle.dump(final_result, handle, protocol=pickle.HIGHEST_PROTOCOL)
bell_performance = {}
for metric in final_result.keys():
if metric not in bell_performance.keys():
bell_performance[metric] = {}
for project in final_result[metric].keys():
bell_performance[metric][project] = np.median(list(final_result[metric][project].values()))
bell_performance_df = pd.DataFrame.from_dict(bell_performance)
bell_performance_df
###Output
_____no_output_____ |
nbs/06_utils.ipynb | ###Markdown
Utility functions> Utility functions for deepflash2
###Code
#hide
from fastcore.test import *
#export
import sys, subprocess, zipfile, imageio, importlib, skimage, zipfile, os, cv2
import math, numpy as np, pandas as pd
from pathlib import Path
from scipy import ndimage
from scipy.spatial.distance import jaccard
from skimage.feature import peak_local_max
from skimage.segmentation import clear_border
from skimage.measure import label
from skimage.segmentation import relabel_sequential, watershed
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import jaccard_score
import matplotlib.pyplot as plt
import albumentations as A
from fastcore.foundation import patch
from fastcore.meta import delegates
from fastai.learner import Recorder
from fastdownload import download_url
from deepflash2.models import check_cellpose_installation
###Output
_____no_output_____
###Markdown
Data Download and Archive Extraction
###Code
#export
def unzip(path, zip_file):
"Unzip and structure archive"
with zipfile.ZipFile(zip_file, 'r') as zf:
f_names = [x for x in zf.namelist() if '__MACOSX' not in x and not x.endswith('/')]
new_root = np.max([len(Path(f).parts) for f in f_names])-2
for f in f_names:
f_path = path / Path(*Path(f).parts[new_root:])
f_path.parent.mkdir(parents=True, exist_ok=True)
data = zf.read(f)
f_path.write_bytes(data)
#export
def download_sample_data(base_url, name, dest, extract=False, timeout=4, show_progress=True):
dest = Path(dest)
dest.mkdir(exist_ok=True, parents=True)
file = download_url(f'{base_url}{name}', dest, show_progress=show_progress, timeout=timeout)
if extract:
unzip(dest, file)
file.unlink()
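# Usage sketch (the base URL and archive name below are placeholders, not real assets):
# download_sample_data('https://example.com/deepflash2/', 'sample_data.zip',
#                      dest='sample_data', extract=True)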
###Output
_____no_output_____
###Markdown
Install packages on demand
###Code
#export
#from https://stackoverflow.com/questions/12332975/installing-python-module-within-code
def install_package(package, version=None):
if version:
subprocess.check_call([sys.executable, "-m", "pip", "install", f'{package}=={version}'])
else:
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
#export
def import_package(package, version=None):
try:
module = importlib.import_module(package)
if version:
assert module.__version__==version
except:
print(f'Installing {package}. Please wait.')
install_package(package, version)
return importlib.import_module(package)
#export
def compose_albumentations(gamma_limit_lower=0, gamma_limit_upper=0, CLAHE_clip_limit=0., brightness_limit=0, contrast_limit=0., distort_limit=0.):
'Compose albumentations augmentations'
augs = []
if sum([gamma_limit_lower,gamma_limit_upper])>0:
augs.append(A.RandomGamma(gamma_limit=(gamma_limit_lower, gamma_limit_upper), p=0.5))
if CLAHE_clip_limit>0:
augs.append(A.CLAHE(clip_limit=CLAHE_clip_limit))
if sum([brightness_limit,contrast_limit])>0:
augs.append(A.RandomBrightnessContrast(brightness_limit=brightness_limit, contrast_limit=contrast_limit))
if distort_limit>0:
augs.append(A.GridDistortion(num_steps=5, distort_limit=distort_limit, interpolation=1, border_mode=4, p=0.5))
return augs
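# Minimal usage sketch: the returned list can be wrapped into an albumentations pipeline.
# The limit values below are arbitrary illustration values, not recommended defaults.
_example_augs = compose_albumentations(gamma_limit_lower=80, gamma_limit_upper=120,
                                       CLAHE_clip_limit=2., brightness_limit=0.1,
                                       contrast_limit=0.1, distort_limit=0.05)
_example_pipeline = A.Compose(_example_augs)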
###Output
_____no_output_____
###Markdown
Ensembling
###Code
#export
def ensemble_results(res_dict, file, std=False):
"Combines single model predictions."
idx = 2 if std else 0
a = [np.array(res_dict[(mod, f)][idx]) for mod, f in res_dict if f==file]
a = np.mean(a, axis=0)
if std:
a = a[...,0]
else:
a = np.argmax(a, axis=-1)
return a
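# Tiny synthetic example of the result-dict layout inferred from the code above: keys are
# (model, file) tuples, index 0 holds the per-class softmax map and index 2 the uncertainty
# map; the middle entry is not used by ensemble_results.
_smx = np.zeros((2, 2, 2))
_smx[..., 1] = 1.  # every pixel scored as class 1
_std = np.full((2, 2, 1), 0.1)
_res = {('model_a', 'img.png'): (_smx, None, _std),
        ('model_b', 'img.png'): (_smx, None, _std)}
test_eq(ensemble_results(_res, 'img.png'), np.ones((2, 2)))
test_eq(ensemble_results(_res, 'img.png', std=True), np.full((2, 2), 0.1))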
#export
def plot_results(*args, df, hastarget=False, model=None, metric_name='dice_score', unc_metric=None, figsize=(20, 20), **kwargs):
"Plot images, (masks), predictions and uncertainties side-by-side."
if len(args)==4:
img, msk, pred, pred_std = args
elif len(args)==3 and not hastarget:
img, pred, pred_std = args
elif len(args)==3:
img, msk, pred = args
elif len(args)==2:
img, pred = args
else: raise NotImplementedError
fig, axs = plt.subplots(nrows=1, ncols=len(args), figsize=figsize, **kwargs)
#One channel fix
if img.ndim == 3 and img.shape[-1] == 1:
img=img[...,0]
axs[0].imshow(img)
axs[0].set_axis_off()
axs[0].set_title(f'File {df.file}')
unc_title = f'Uncertainty \n {unc_metric}: {df[unc_metric]:.3f}' if unc_metric else 'Uncertainty'
pred_title = 'Prediction' if model is None else f'Prediction {model}'
if len(args)==4:
axs[1].imshow(msk)
axs[1].set_axis_off()
axs[1].set_title('Target')
axs[2].imshow(pred)
axs[2].set_axis_off()
axs[2].set_title(f'{pred_title} \n {metric_name}: {df[metric_name]:.2f}')
axs[3].imshow(pred_std)
axs[3].set_axis_off()
axs[3].set_title(unc_title)
elif len(args)==3 and not hastarget:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
axs[2].imshow(pred_std)
axs[2].set_axis_off()
axs[2].set_title(unc_title)
elif len(args)==3:
axs[1].imshow(msk)
axs[1].set_axis_off()
axs[1].set_title('Target')
axs[2].imshow(pred)
axs[2].set_axis_off()
axs[2].set_title(f'{pred_title} \n {metric_name}: {df[metric_name]:.2f}')
elif len(args)==2:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
plt.show()
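# Usage sketch (hypothetical values; `df` is one row of a results table with the columns used above):
# row = pd.Series({'file': 'img.png', 'dice_score': 0.9, 'uncertainty_score': 0.1})
# plot_results(img, msk, pred, pred_std, df=row, unc_metric='uncertainty_score')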
###Output
_____no_output_____
###Markdown
Patch to show metrics in Learner
###Code
#export
#from https://forums.fast.ai/t/plotting-metrics-after-learning/69937
@patch
@delegates(plt.subplots)
def plot_metrics(self: Recorder, nrows=None, ncols=None, figsize=None, **kwargs):
metrics = np.stack(self.values)
names = self.metric_names[1:-1]
n = len(names) - 1
if nrows is None and ncols is None:
nrows = int(math.sqrt(n))
ncols = int(np.ceil(n / nrows))
elif nrows is None: nrows = int(np.ceil(n / ncols))
elif ncols is None: ncols = int(np.ceil(n / nrows))
figsize = figsize or (ncols * 6, nrows * 4)
fig, axs = plt.subplots(nrows, ncols, figsize=figsize, **kwargs)
axs = [ax if i < n else ax.set_axis_off() for i, ax in enumerate(axs.flatten())][:n]
for i, (name, ax) in enumerate(zip(names, [axs[0]] + axs)):
ax.plot(metrics[:, i], color='#1f77b4' if i == 0 else '#ff7f0e', label='valid' if i > 0 else 'train')
ax.set_title(name if i > 1 else 'losses')
ax.legend(loc='best')
plt.show()
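# Usage sketch (assumes a trained fastai learner `learn`; not executed here):
# learn.recorder.plot_metrics()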
###Output
_____no_output_____
###Markdown
Pixelwise Analysis
###Code
#hide
# Generate an initial random image and mask with two circles
x, y = np.indices((80, 80))
x1, y1, x2, y2 = 28, 28, 44, 52
r1, r2 = 7, 20
mask_circle1 = (x - x1) ** 2 + (y - y1) ** 2 < r1 ** 2
mask_circle2 = (x - x2) ** 2 + (y - y2) ** 2 < r2 ** 2
mask = np.logical_or(mask_circle1, mask_circle2)
empty_mask = np.zeros_like(mask)
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].imshow(mask)
axs[0].set_axis_off()
axs[0].set_title('Mask')
axs[1].imshow(empty_mask)
axs[1].set_axis_off()
axs[1].set_title('Empty Mask');
#export
def iou(a,b,threshold=0.5, average='macro', **kwargs):
'''Computes the Intersection-Over-Union metric.'''
a = np.array(a).flatten()
b = np.array(b).flatten()
if a.max()>1 or b.max()>1:
return jaccard_score(a, b, average=average, **kwargs)
else:
a = np.array(a) > threshold
b = np.array(b) > threshold
overlap = a*b # Logical AND
union = a+b # Logical OR
return np.divide(np.count_nonzero(overlap),np.count_nonzero(union))
# Test binary
test_eq(iou(mask, mask), 1)
test_eq(iou(mask, empty_mask), 0)
# Todo: add multiclass tests https://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_score.html
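# Multiclass sketch: integer label maps (values > 1) are routed to sklearn's jaccard_score
_ml = np.array([0, 1, 2, 2])
test_eq(iou(_ml, _ml), 1)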
#export
def dice_score(*args, **kwargs):
'''Computes the Dice coefficient metric.'''
iou_score = iou(*args, **kwargs)
return 2*iou_score/(iou_score+1)
# Test binary
test_eq(dice_score(mask, mask), 1)
test_eq(dice_score(mask, empty_mask), 0)
###Output
_____no_output_____
###Markdown
ROI-wise Analysis
###Code
#export
def label_mask(mask, threshold=0.5, connectivity=4, min_pixel=0, do_watershed=False, exclude_border=False):
'''Analyze regions and return labels'''
if mask.ndim == 3:
mask = np.squeeze(mask, axis=2)
# apply threshold to mask
# bw = closing(mask > threshold, square(2))
bw = (mask > threshold).astype('uint8')
# label image regions
# label_image = label(bw, connectivity=2) # Falk p.13, 8-“connectivity”.
_, label_image = cv2.connectedComponents(bw, connectivity=connectivity)
# Watershed: Separates objects in image by generate the markers
# as local maxima of the distance to the background
if do_watershed:
distance = ndimage.distance_transform_edt(bw)
# Minimum number of pixels separating peaks in a region of `2 * min_distance + 1`
# (i.e. peaks are separated by at least `min_distance`)
min_distance = int(np.ceil(np.sqrt(min_pixel / np.pi)))
local_maxi = peak_local_max(distance, indices=False, exclude_border=False,
min_distance=min_distance, labels=label_image)
markers = label(local_maxi)
label_image = watershed(-distance, markers, mask=bw)
# remove artifacts connected to image border
if exclude_border:
label_image = clear_border(label_image)
# remove areas < min pixel
unique, counts = np.unique(label_image, return_counts=True)
label_image[np.isin(label_image, unique[counts<min_pixel])] = 0
# re-label image
label_image, _ , _ = relabel_sequential(label_image, offset=1)
return label_image
tst_lbl_a = label_mask(mask, min_pixel=0)
test_eq(tst_lbl_a.max(), 2)
test_eq(tst_lbl_a.min(), 0)
plt.imshow(tst_lbl_a);
tst_lbl_b = label_mask(mask, min_pixel=150)
test_eq(tst_lbl_b.max(), 1)
plt.imshow(tst_lbl_b);
#export
def get_instance_segmentation_metrics(a, b, is_binary=False, thresholds=None, **kwargs):
'''
Computes instance segmentation metric based on cellpose/stardist implementation.
https://cellpose.readthedocs.io/en/latest/api.html#cellpose.metrics.average_precision
'''
try:
from cellpose import metrics
except:
check_cellpose_installation()
from cellpose import metrics
# Find connected components in binary mask
if is_binary:
a = label_mask(a, **kwargs)
b = label_mask(b, **kwargs)
if thresholds is None:
#https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py
thresholds = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)
ap, tp, fp, fn = metrics.average_precision(a, b, threshold=thresholds)
return ap, tp, fp, fn
# Test binary
ap, tp, fp, fn = get_instance_segmentation_metrics(mask, mask, is_binary=True)
test_eq(len(ap),10)
test_eq(tp[0],2)
ap, tp, fp, fn = get_instance_segmentation_metrics(mask, empty_mask, is_binary=True, thresholds=[.5])
test_eq(len(ap),1)
test_eq(fn[0],2)
###Output
2021-11-19 08:02:38,278 [INFO] WRITING LOG OUTPUT TO /media/data/home/mag01ud/.cellpose/run.log
###Markdown
ROI Export to ImageJ
###Code
#export
def export_roi_set(mask, intensity_image=None, instance_labels=False, name='RoiSet', path=Path('.'), ascending=True, min_pixel=0):
"EXPERIMENTAL: Export mask regions to imageJ ROI Set"
roifile = import_package('roifile')
if not instance_labels:
_, mask = cv2.connectedComponents(mask.astype('uint8'), connectivity=4)
if intensity_image is not None:
props = skimage.measure.regionprops_table(mask, intensity_image, properties=('area', 'coords', 'mean_intensity'))
df_props = pd.DataFrame(props)
df_props = df_props[df_props.area>min_pixel].sort_values('mean_intensity', ascending=ascending).reset_index()
else:
props = skimage.measure.regionprops_table(mask, properties=('area', 'coords'))
df_props = pd.DataFrame(props).reset_index()
df_props['mean_intensity'] = 1.
i = 1
with zipfile.ZipFile(path/f'{name}.zip', mode='w') as myzip:
for _, row in df_props.iterrows():
contours = skimage.measure.find_contours(mask==row['index']+1, level=0.5, fully_connected='low')
for cont in contours:
roi_name = f'{i:04d}-{row.mean_intensity:3f}.roi'
points = np.array([cont[:,1]+0.5, cont[:,0]+0.5]).T
roi = roifile.ImagejRoi.frompoints(points)
roi.tofile(roi_name)
myzip.write(roi_name)
os.remove(roi_name)
i += 1
return path/f'{name}.zip'
# EXPERIMENTAL, needs more testing
path = export_roi_set(mask)
path.unlink()
###Output
_____no_output_____
###Markdown
Miscellaneous
###Code
#export
def calc_iterations(n_iter, ds_length, bs):
"Calculate the number of required epochs for 'n_iter' iterations."
iter_per_epoch = ds_length/bs
return int(np.ceil(n_iter/iter_per_epoch))
test_eq(calc_iterations(100, 8, 4), 50)
#export
def get_label_fn(img_path, msk_dir_path):
'Infers suffix from mask name and return label_fn'
msk_path = [x for x in msk_dir_path.iterdir() if x.name.startswith(img_path.stem)]
mask_suffix = msk_path[0].name[len(img_path.stem):]
return lambda o: msk_dir_path/f'{o.stem}{mask_suffix}'
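# Usage sketch (hypothetical layout: images/0001.tif paired with masks/0001_mask.png):
# label_fn = get_label_fn(Path('images/0001.tif'), Path('masks'))
# label_fn(Path('images/0001.tif'))  # -> Path('masks/0001_mask.png')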
#exports
def save_mask(mask, path, filetype='.png'):
mask = mask.astype(np.uint8) if np.max(mask)>1 else (mask*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), mask)
#exports
def save_unc(unc, path, filetype='.png'):
unc = (unc/unc.max()*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), unc)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import *
notebook2script()
###Output
Converted 00_learner.ipynb.
Converted 01_models.ipynb.
Converted 02_data.ipynb.
Converted 05_losses.ipynb.
Converted 06_utils.ipynb.
Converted 07_tta.ipynb.
Converted 08_gui.ipynb.
Converted 09_gt.ipynb.
Converted add_information.ipynb.
Converted gt_estimation.ipynb.
Converted index.ipynb.
Converted model_library.ipynb.
Converted predict.ipynb.
Converted train.ipynb.
Converted tutorial.ipynb.
Converted tutorial_gt.ipynb.
Converted tutorial_pred.ipynb.
Converted tutorial_train.ipynb.
###Markdown
Utility functions> Utility functions for deepflash2
###Code
#hide
from fastcore.test import *
#export
import sys, subprocess, zipfile, imageio, importlib, numpy as np
from pathlib import Path
from scipy import ndimage
from scipy.spatial.distance import jaccard
from scipy.stats import entropy
from skimage.feature import peak_local_max
from skimage.segmentation import clear_border
from skimage.measure import label
from skimage.segmentation import relabel_sequential, watershed
from scipy.optimize import linear_sum_assignment
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Archive Extraction
###Code
#export
def unzip(path, zip_file):
"Unzip and structure archive"
with zipfile.ZipFile(zip_file, 'r') as zf:
f_names = [x for x in zf.namelist() if '__MACOSX' not in x and not x.endswith('/')]
new_root = np.max([len(Path(f).parts) for f in f_names])-2
for f in f_names:
f_path = path / Path(*Path(f).parts[new_root:])
f_path.parent.mkdir(parents=True, exist_ok=True)
data = zf.read(f)
f_path.write_bytes(data)
###Output
_____no_output_____
###Markdown
Ensembling
###Code
#export
def ensemble_results(res_dict, file, std=False):
"Combines single model predictions."
idx = 2 if std else 0
a = [np.array(res_dict[(mod, f)][idx]) for mod, f in res_dict if f==file]
a = np.mean(a, axis=0)
if std:
a = a[...,0]
else:
a = np.argmax(a, axis=-1)
return a
#export
def plot_results(*args, df, hastarget=False, model=None, unc_metric=None, figsize=(20, 20), **kwargs):
"Plot images, (masks), predictions and uncertainties side-by-side."
if len(args)==4:
img, msk, pred, pred_std = args
elif len(args)==3 and not hastarget:
img, pred, pred_std = args
elif len(args)==3:
img, msk, pred = args
elif len(args)==2:
img, pred = args
else: raise NotImplementedError
fig, axs = plt.subplots(nrows=1, ncols=len(args), figsize=figsize, **kwargs)
#One channel fix
if img.ndim == 3 and img.shape[-1] == 1:
img=img[...,0]
axs[0].imshow(img)
axs[0].set_axis_off()
axs[0].set_title(f'File {df.file}')
unc_title = f'Uncertainty \n {unc_metric}: {df[unc_metric]:.3f}' if unc_metric else 'Uncertainty'
pred_title = 'Prediction' if model is None else f'Prediction {model}'
if len(args)==4:
axs[1].imshow(msk)
axs[1].set_axis_off()
axs[1].set_title('Target')
axs[2].imshow(pred)
axs[2].set_axis_off()
axs[2].set_title(f'{pred_title} \n IoU: {df.iou:.2f}')
axs[3].imshow(pred_std)
axs[3].set_axis_off()
axs[3].set_title(unc_title)
elif len(args)==3 and not hastarget:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
axs[2].imshow(pred_std)
axs[2].set_axis_off()
axs[2].set_title(unc_title)
elif len(args)==3:
axs[1].imshow(msk)
axs[1].set_axis_off()
axs[1].set_title('Target')
axs[2].imshow(pred)
axs[2].set_axis_off()
axs[2].set_title(f'{pred_title} \n IoU: {df.iou:.2f}')
elif len(args)==2:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
plt.show()
###Output
_____no_output_____
###Markdown
Pixelwise Analysis
###Code
#hide
# Generate an initial random image and mask with two circles
x, y = np.indices((80, 80))
x1, y1, x2, y2 = 28, 28, 44, 52
r1, r2 = 7, 20
mask_circle1 = (x - x1) ** 2 + (y - y1) ** 2 < r1 ** 2
mask_circle2 = (x - x2) ** 2 + (y - y2) ** 2 < r2 ** 2
mask = np.logical_or(mask_circle1, mask_circle2)
empty_mask = np.zeros_like(mask)
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].imshow(mask)
axs[0].set_axis_off()
axs[0].set_title('Mask')
axs[1].imshow(empty_mask)
axs[1].set_axis_off()
axs[1].set_title('Empty Mask');
#export
def iou(a,b,threshold=0.5):
'''Computes the Intersection-Over-Union metric.'''
a = np.array(a) > threshold
b = np.array(b) > threshold
overlap = a*b # Logical AND
union = a+b # Logical OR
return np.divide(np.count_nonzero(overlap),np.count_nonzero(union))
test_eq(iou(mask, mask), 1)
test_eq(iou(mask, empty_mask), 0)
###Output
_____no_output_____
###Markdown
ROI-wise Analysis
###Code
#export
def label_mask(mask, threshold=0.5, min_pixel=15, do_watershed=False, exclude_border=False):
'''Analyze regions and return labels'''
if mask.ndim == 3:
mask = np.squeeze(mask, axis=2)
# apply threshold to mask
# bw = closing(mask > threshold, square(2))
bw = (mask > threshold).astype(int)
# label image regions
label_image = label(bw, connectivity=2) # Falk p.13, 8-“connectivity”.
# Watershed: Separates objects in image by generate the markers
# as local maxima of the distance to the background
if do_watershed:
distance = ndimage.distance_transform_edt(bw)
# Minimum number of pixels separating peaks in a region of `2 * min_distance + 1`
# (i.e. peaks are separated by at least `min_distance`)
min_distance = int(np.ceil(np.sqrt(min_pixel / np.pi)))
local_maxi = peak_local_max(distance, indices=False, exclude_border=False,
min_distance=min_distance, labels=label_image)
markers = label(local_maxi)
label_image = watershed(-distance, markers, mask=bw)
# remove artifacts connected to image border
if exclude_border:
label_image = clear_border(label_image)
# remove areas < min pixel
unique, counts = np.unique(label_image, return_counts=True)
label_image[np.isin(label_image, unique[counts<min_pixel])] = 0
# re-label image
label_image, _ , _ = relabel_sequential(label_image, offset=1)
return (label_image)
tst_lbl_a = label_mask(mask, min_pixel=0)
test_eq(tst_lbl_a.max(), 2)
test_eq(tst_lbl_a.min(), 0)
plt.imshow(tst_lbl_a);
tst_lbl_b = label_mask(mask, min_pixel=150)
test_eq(tst_lbl_b.max(), 1)
plt.imshow(tst_lbl_b);
#export
def get_candidates(labels_a, labels_b):
    '''Get candidate masks for ROI-wise analysis'''
    label_stack = np.dstack((labels_a, labels_b))
    candidates = np.unique(label_stack.reshape(-1, label_stack.shape[2]), axis=0)
    # Remove Zero Entries
    candidates = candidates[np.prod(candidates, axis=1) > 0]
    return candidates
#export
def iou_mapping(labels_a, labels_b):
'''Compare masks using ROI-wise analysis'''
candidates = get_candidates(labels_a, labels_b)
if candidates.size > 0:
# create a similarity matrix
dim_a = np.max(candidates[:,0])+1
dim_b = np.max(candidates[:,1])+1
similarity_matrix = np.zeros((dim_a, dim_b))
for x,y in candidates:
roi_a = (labels_a == x).astype(np.uint8).flatten()
roi_b = (labels_b == y).astype(np.uint8).flatten()
similarity_matrix[x,y] = 1-jaccard(roi_a, roi_b)
row_ind, col_ind = linear_sum_assignment(-similarity_matrix)
return(similarity_matrix[row_ind,col_ind],
row_ind, col_ind,
np.max(labels_a),
np.max(labels_b)
)
else:
return([],
np.nan, np.nan,
np.max(labels_a),
np.max(labels_b)
)
test_eq(iou_mapping(tst_lbl_a, tst_lbl_a), ([0., 1., 1], [0, 1, 2], [0, 1, 2], 2, 2))
test_eq(iou_mapping(tst_lbl_a, tst_lbl_b), ([0., 1.], [0, 2], [0, 1], 2, 1))
#export
def calculate_roi_measures(*masks, iou_threshold=.5, **kwargs):
"Calculates precision, recall, and f1_score on ROI-level"
labels = [label_mask(m, **kwargs) for m in masks]
matches_iou, _,_, count_a, count_b = iou_mapping(*labels)
matches = np.sum(np.array(matches_iou) > iou_threshold)
precision = matches/count_a
recall = matches/count_b
f1_score = 2 * (precision * recall) / (precision + recall)
return recall, precision, f1_score
test_eq(calculate_roi_measures(mask, mask), (1.0, 1.0, 1.0))
test_eq(calculate_roi_measures(mask, mask, min_pixel=150), (1.0, 1.0, 1.0))
###Output
_____no_output_____
###Markdown
Miscellaneous
###Code
#export
def calc_iterations(n_iter, ds_length, bs):
"Calculate the number of required epochs for 'n_iter' iterations."
iter_per_epoch = ds_length/bs
return int(np.ceil(n_iter/iter_per_epoch))
test_eq(calc_iterations(100, 8, 4), 50)
#export
def get_label_fn(img_path, msk_dir_path):
'Infers suffix from mask name and return label_fn'
msk_path = [x for x in msk_dir_path.iterdir() if x.name.startswith(img_path.stem)]
mask_suffix = msk_path[0].name[len(img_path.stem):]
return lambda o: msk_dir_path/f'{o.stem}{mask_suffix}'
#exports
def save_mask(mask, path, filetype='.png'):
mask = mask.astype(np.uint8) if np.max(mask)>1 else (mask*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), mask)
#exports
def save_unc(unc, path, filetype='.png'):
unc = (unc/unc.max()*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), unc)
#export
#from https://stackoverflow.com/questions/12332975/installing-python-module-within-code
def install_package(package):
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
#export
def import_package(package):
try:
importlib.import_module(package)
except:
print(f'Installing {package}. Please wait.')
install_package(package)
return importlib.import_module(package)
#export
def compose_albumentations(CLAHE_clip_limit=0., brightness_limit=0, contrast_limit=0.):
'Compose albumentations augmentations'
A = import_package('albumentations')
augs = []
if CLAHE_clip_limit>0:
augs.append(A.CLAHE(clip_limit=CLAHE_clip_limit))
if sum([brightness_limit,contrast_limit])>0:
augs.append(A.RandomBrightnessContrast(brightness_limit=brightness_limit, contrast_limit=contrast_limit))
return A.OneOf([*augs], p=0.5)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import *
notebook2script()
###Output
Converted 00_learner.ipynb.
Converted 01_models.ipynb.
Converted 02_data.ipynb.
Converted 02a_transforms.ipynb.
Converted 03_metrics.ipynb.
Converted 04_callbacks.ipynb.
Converted 05_losses.ipynb.
Converted 06_utils.ipynb.
Converted 07_tta.ipynb.
Converted 08_gui.ipynb.
Converted 09_gt.ipynb.
Converted add_information.ipynb.
Converted deepflash2.ipynb.
Converted gt_estimation.ipynb.
Converted index.ipynb.
Converted model_library.ipynb.
Converted predict.ipynb.
Converted train.ipynb.
Converted tutorial.ipynb.
###Markdown
Utility functions> Utility functions for deepflash2
###Code
#hide
from fastcore.test import *
#export
import sys, subprocess, zipfile, imageio, importlib, numpy as np
from pathlib import Path
from scipy import ndimage
from scipy.spatial.distance import jaccard
from scipy.stats import entropy
from skimage.feature import peak_local_max
from skimage.segmentation import clear_border
from skimage.measure import label
from skimage.segmentation import relabel_sequential, watershed
from scipy.optimize import linear_sum_assignment
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Archive Extraction
###Code
#export
def unzip(path, zip_file):
"Unzip and structure archive"
with zipfile.ZipFile(zip_file, 'r') as zf:
f_names = [x for x in zf.namelist() if '__MACOSX' not in x and not x.endswith('/')]
new_root = np.max([len(Path(f).parts) for f in f_names])-2
for f in f_names:
f_path = path / Path(*Path(f).parts[new_root:])
f_path.parent.mkdir(parents=True, exist_ok=True)
data = zf.read(f)
f_path.write_bytes(data)
###Output
_____no_output_____
###Markdown
Ensembling
###Code
#export
def ensemble_results(res_dict, file, std=False):
"Combines single model predictions."
idx = 2 if std else 0
a = [np.array(res_dict[(mod, f)][idx]) for mod, f in res_dict if f==file]
a = np.mean(a, axis=0)
if std:
a = a[...,0]
else:
a = np.argmax(a, axis=-1)
return a
#export
def plot_results(*args, df, model=None, unc_metric=None, figsize=(20, 20), **kwargs):
"Plot images, (masks), predictions and uncertainties side-by-side."
if len(args)==4:
img, msk, pred, pred_std = args
if len(args)==3:
img, pred, pred_std = args
if len(args)==2:
img, pred = args
fig, axs = plt.subplots(nrows=1, ncols=len(args), figsize=figsize, **kwargs)
#One channel fix
if img.ndim == 3 and img.shape[-1] == 1:
img=img[...,0]
axs[0].imshow(img)
axs[0].set_axis_off()
axs[0].set_title(f'File {df.file}')
unc_title = f'Uncertainty \n {unc_metric}: {df[unc_metric]:.3f}' if unc_metric else 'Uncertainty'
pred_title = 'Prediction' if model is None else f'Prediction {model}'
if len(args)==4:
axs[1].imshow(msk)
axs[1].set_axis_off()
axs[1].set_title('Target')
axs[2].imshow(pred)
axs[2].set_axis_off()
axs[2].set_title(f'{pred_title} \n IoU: {df.iou:.2f}')
axs[3].imshow(pred_std)
axs[3].set_axis_off()
axs[3].set_title(unc_title)
elif len(args)==3:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
axs[2].imshow(pred_std)
axs[2].set_axis_off()
axs[2].set_title(unc_title)
elif len(args)==2:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
plt.show()
###Output
_____no_output_____
###Markdown
Pixelwise Analysis
###Code
#hide
# Generate an initial random image and mask with two circles
x, y = np.indices((80, 80))
x1, y1, x2, y2 = 28, 28, 44, 52
r1, r2 = 7, 20
mask_circle1 = (x - x1) ** 2 + (y - y1) ** 2 < r1 ** 2
mask_circle2 = (x - x2) ** 2 + (y - y2) ** 2 < r2 ** 2
mask = np.logical_or(mask_circle1, mask_circle2)
empty_mask = np.zeros_like(mask)
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].imshow(mask)
axs[0].set_axis_off()
axs[0].set_title('Mask')
axs[1].imshow(empty_mask)
axs[1].set_axis_off()
axs[1].set_title('Empty Mask');
#export
def iou(a,b,threshold=0.5):
'''Computes the Intersection-Over-Union metric.'''
a = np.array(a) > threshold
b = np.array(b) > threshold
overlap = a*b # Logical AND
union = a+b # Logical OR
return np.divide(np.count_nonzero(overlap),np.count_nonzero(union))
test_eq(iou(mask, mask), 1)
test_eq(iou(mask, empty_mask), 0)
###Output
_____no_output_____
###Markdown
ROI-wise Analysis
###Code
#export
def label_mask(mask, threshold=0.5, min_pixel=15, do_watershed=False, exclude_border=False):
'''Analyze regions and return labels'''
if mask.ndim == 3:
mask = np.squeeze(mask, axis=2)
# apply threshold to mask
# bw = closing(mask > threshold, square(2))
bw = (mask > threshold).astype(int)
# label image regions
label_image = label(bw, connectivity=2) # Falk p.13, 8-“connectivity”.
# Watershed: Separates objects in image by generate the markers
# as local maxima of the distance to the background
if do_watershed:
distance = ndimage.distance_transform_edt(bw)
# Minimum number of pixels separating peaks in a region of `2 * min_distance + 1`
# (i.e. peaks are separated by at least `min_distance`)
min_distance = int(np.ceil(np.sqrt(min_pixel / np.pi)))
local_maxi = peak_local_max(distance, indices=False, exclude_border=False,
min_distance=min_distance, labels=label_image)
markers = label(local_maxi)
label_image = watershed(-distance, markers, mask=bw)
# remove artifacts connected to image border
if exclude_border:
label_image = clear_border(label_image)
# remove areas < min pixel
unique, counts = np.unique(label_image, return_counts=True)
label_image[np.isin(label_image, unique[counts<min_pixel])] = 0
# re-label image
label_image, _ , _ = relabel_sequential(label_image, offset=1)
return (label_image)
tst_lbl_a = label_mask(mask, min_pixel=0)
test_eq(tst_lbl_a.max(), 2)
test_eq(tst_lbl_a.min(), 0)
plt.imshow(tst_lbl_a);
tst_lbl_b = label_mask(mask, min_pixel=150)
test_eq(tst_lbl_b.max(), 1)
plt.imshow(tst_lbl_b);
#export
def get_candidates(labels_a, labels_b):
    '''Get candidate masks for ROI-wise analysis'''
    label_stack = np.dstack((labels_a, labels_b))
    candidates = np.unique(label_stack.reshape(-1, label_stack.shape[2]), axis=0)
    # Remove Zero Entries
    candidates = candidates[np.prod(candidates, axis=1) > 0]
    return candidates
#export
def iou_mapping(labels_a, labels_b):
'''Compare masks using ROI-wise analysis'''
candidates = get_candidates(labels_a, labels_b)
if candidates.size > 0:
# create a similarity matrix
dim_a = np.max(candidates[:,0])+1
dim_b = np.max(candidates[:,1])+1
similarity_matrix = np.zeros((dim_a, dim_b))
for x,y in candidates:
roi_a = (labels_a == x).astype(np.uint8).flatten()
roi_b = (labels_b == y).astype(np.uint8).flatten()
similarity_matrix[x,y] = 1-jaccard(roi_a, roi_b)
row_ind, col_ind = linear_sum_assignment(-similarity_matrix)
return(similarity_matrix[row_ind,col_ind],
row_ind, col_ind,
np.max(labels_a),
np.max(labels_b)
)
else:
return([],
np.nan, np.nan,
np.max(labels_a),
np.max(labels_b)
)
test_eq(iou_mapping(tst_lbl_a, tst_lbl_a), ([0., 1., 1], [0, 1, 2], [0, 1, 2], 2, 2))
test_eq(iou_mapping(tst_lbl_a, tst_lbl_b), ([0., 1.], [0, 2], [0, 1], 2, 1))
#export
def calculate_roi_measures(*masks, iou_threshold=.5, **kwargs):
"Calculates precision, recall, and f1_score on ROI-level"
labels = [label_mask(m, **kwargs) for m in masks]
matches_iou, _,_, count_a, count_b = iou_mapping(*labels)
matches = np.sum(np.array(matches_iou) > iou_threshold)
precision = matches/count_a
recall = matches/count_b
f1_score = 2 * (precision * recall) / (precision + recall)
return recall, precision, f1_score
test_eq(calculate_roi_measures(mask, mask), (1.0, 1.0, 1.0))
test_eq(calculate_roi_measures(mask, mask, min_pixel=150), (1.0, 1.0, 1.0))
###Output
_____no_output_____
###Markdown
Miscellaneous
###Code
#export
def calc_iterations(n_iter, ds_length, bs):
"Calculate the number of required epochs for 'n_iter' iterations."
iter_per_epoch = ds_length/bs
return int(np.ceil(n_iter/iter_per_epoch))
test_eq(calc_iterations(100, 8, 4), 50)
#export
def get_label_fn(img_path, msk_dir_path):
'Infers suffix from mask name and return label_fn'
msk_path = [x for x in msk_dir_path.iterdir() if x.name.startswith(img_path.stem)]
mask_suffix = msk_path[0].name[len(img_path.stem):]
return lambda o: msk_dir_path/f'{o.stem}{mask_suffix}'
#exports
def save_mask(mask, path, filetype='.png'):
mask = mask.astype(np.uint8) if np.max(mask)>1 else (mask*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), mask)
#exports
def save_unc(unc, path, filetype='.png'):
unc = (unc/unc.max()*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), unc)
#export
#from https://stackoverflow.com/questions/12332975/installing-python-module-within-code
def install_package(package):
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
#export
def import_package(package):
try:
importlib.import_module(package)
except:
print(f'Installing {package}. Please wait.')
install_package("package")
return importlib.import_module(package)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import *
notebook2script()
###Output
Converted 00_learner.ipynb.
Converted 01_models.ipynb.
Converted 02_data.ipynb.
Converted 03_metrics.ipynb.
Converted 04_callbacks.ipynb.
Converted 05_losses.ipynb.
Converted 06_utils.ipynb.
Converted 07_tta.ipynb.
Converted 08_gui.ipynb.
Converted 09_gt.ipynb.
Converted add_information.ipynb.
Converted deepflash2.ipynb.
Converted gt_estimation.ipynb.
Converted index.ipynb.
Converted model_library.ipynb.
Converted predict.ipynb.
Converted train.ipynb.
Converted tutorial.ipynb.
###Markdown
Utility functions> Utility functions for deepflash2
###Code
#hide
from fastcore.test import *
#export
import sys, subprocess, zipfile, imageio, importlib, skimage, zipfile, os, cv2
import numpy as np, pandas as pd
from pathlib import Path
from scipy import ndimage
from scipy.spatial.distance import jaccard
from skimage.feature import peak_local_max
from skimage.segmentation import clear_border
from skimage.measure import label
from skimage.segmentation import relabel_sequential, watershed
from scipy.optimize import linear_sum_assignment
import matplotlib.pyplot as plt
import albumentations as A
###Output
_____no_output_____
###Markdown
Archive Extraction
###Code
#export
def unzip(path, zip_file):
"Unzip and structure archive"
with zipfile.ZipFile(zip_file, 'r') as zf:
f_names = [x for x in zf.namelist() if '__MACOSX' not in x and not x.endswith('/')]
new_root = np.max([len(Path(f).parts) for f in f_names])-2
for f in f_names:
f_path = path / Path(*Path(f).parts[new_root:])
f_path.parent.mkdir(parents=True, exist_ok=True)
data = zf.read(f)
f_path.write_bytes(data)
###Output
_____no_output_____
###Markdown
Install packages on demand
###Code
#export
#from https://stackoverflow.com/questions/12332975/installing-python-module-within-code
def install_package(package):
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
#export
def import_package(package):
try:
importlib.import_module(package)
except:
print(f'Installing {package}. Please wait.')
install_package(package)
return importlib.import_module(package)
#export
def compose_albumentations(gamma_limit_lower=0, gamma_limit_upper=0, CLAHE_clip_limit=0., brightness_limit=0, contrast_limit=0., distort_limit=0.):
'Compose albumentations augmentations'
augs = []
if sum([gamma_limit_lower,gamma_limit_upper])>0:
augs.append(A.RandomGamma(gamma_limit=(gamma_limit_lower, gamma_limit_upper), p=0.5))
if CLAHE_clip_limit>0:
augs.append(A.CLAHE(clip_limit=CLAHE_clip_limit))
if sum([brightness_limit,contrast_limit])>0:
augs.append(A.RandomBrightnessContrast(brightness_limit=brightness_limit, contrast_limit=contrast_limit))
if distort_limit>0:
augs.append(A.GridDistortion(num_steps=5, distort_limit=distort_limit, interpolation=1, border_mode=4, p=0.5))
return augs
###Output
_____no_output_____
###Markdown
Ensembling
###Code
#export
def ensemble_results(res_dict, file, std=False):
"Combines single model predictions."
idx = 2 if std else 0
a = [np.array(res_dict[(mod, f)][idx]) for mod, f in res_dict if f==file]
a = np.mean(a, axis=0)
if std:
a = a[...,0]
else:
a = np.argmax(a, axis=-1)
return a
#export
def plot_results(*args, df, hastarget=False, model=None, unc_metric=None, figsize=(20, 20), **kwargs):
"Plot images, (masks), predictions and uncertainties side-by-side."
if len(args)==4:
img, msk, pred, pred_std = args
elif len(args)==3 and not hastarget:
img, pred, pred_std = args
elif len(args)==3:
img, msk, pred = args
elif len(args)==2:
img, pred = args
else: raise NotImplementedError
fig, axs = plt.subplots(nrows=1, ncols=len(args), figsize=figsize, **kwargs)
#One channel fix
if img.ndim == 3 and img.shape[-1] == 1:
img=img[...,0]
axs[0].imshow(img)
axs[0].set_axis_off()
axs[0].set_title(f'File {df.file}')
unc_title = f'Uncertainty \n {unc_metric}: {df[unc_metric]:.3f}' if unc_metric else 'Uncertainty'
pred_title = 'Prediction' if model is None else f'Prediction {model}'
if len(args)==4:
axs[1].imshow(msk)
axs[1].set_axis_off()
axs[1].set_title('Target')
axs[2].imshow(pred)
axs[2].set_axis_off()
axs[2].set_title(f'{pred_title} \n IoU: {df.iou:.2f}')
axs[3].imshow(pred_std)
axs[3].set_axis_off()
axs[3].set_title(unc_title)
elif len(args)==3 and not hastarget:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
axs[2].imshow(pred_std)
axs[2].set_axis_off()
axs[2].set_title(unc_title)
elif len(args)==3:
axs[1].imshow(msk)
axs[1].set_axis_off()
axs[1].set_title('Target')
axs[2].imshow(pred)
axs[2].set_axis_off()
axs[2].set_title(f'{pred_title} \n IoU: {df.iou:.2f}')
elif len(args)==2:
axs[1].imshow(pred)
axs[1].set_axis_off()
axs[1].set_title(pred_title)
plt.show()
###Output
_____no_output_____
###Markdown
Pixelwise Analysis
###Code
#hide
# Generate an initial random image and mask with two circles
x, y = np.indices((80, 80))
x1, y1, x2, y2 = 28, 28, 44, 52
r1, r2 = 7, 20
mask_circle1 = (x - x1) ** 2 + (y - y1) ** 2 < r1 ** 2
mask_circle2 = (x - x2) ** 2 + (y - y2) ** 2 < r2 ** 2
mask = np.logical_or(mask_circle1, mask_circle2)
empty_mask = np.zeros_like(mask)
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].imshow(mask)
axs[0].set_axis_off()
axs[0].set_title('Mask')
axs[1].imshow(empty_mask)
axs[1].set_axis_off()
axs[1].set_title('Empty Mask');
#export
def iou(a,b,threshold=0.5):
'''Computes the Intersection-Over-Union metric.'''
a = np.array(a) > threshold
b = np.array(b) > threshold
overlap = a*b # Logical AND
union = a+b # Logical OR
return np.divide(np.count_nonzero(overlap),np.count_nonzero(union))
test_eq(iou(mask, mask), 1)
test_eq(iou(mask, empty_mask), 0)
###Output
_____no_output_____
###Markdown
ROI-wise Analysis
###Code
#export
def label_mask(mask, threshold=0.5, min_pixel=15, do_watershed=False, exclude_border=False):
'''Analyze regions and return labels'''
if mask.ndim == 3:
mask = np.squeeze(mask, axis=2)
# apply threshold to mask
# bw = closing(mask > threshold, square(2))
bw = (mask > threshold).astype(int)
# label image regions
label_image = label(bw, connectivity=2) # Falk p.13, 8-“connectivity”.
# Watershed: Separates objects in image by generate the markers
# as local maxima of the distance to the background
if do_watershed:
distance = ndimage.distance_transform_edt(bw)
# Minimum number of pixels separating peaks in a region of `2 * min_distance + 1`
# (i.e. peaks are separated by at least `min_distance`)
min_distance = int(np.ceil(np.sqrt(min_pixel / np.pi)))
local_maxi = peak_local_max(distance, indices=False, exclude_border=False,
min_distance=min_distance, labels=label_image)
markers = label(local_maxi)
label_image = watershed(-distance, markers, mask=bw)
# remove artifacts connected to image border
if exclude_border:
label_image = clear_border(label_image)
# remove areas < min pixel
unique, counts = np.unique(label_image, return_counts=True)
label_image[np.isin(label_image, unique[counts<min_pixel])] = 0
# re-label image
label_image, _ , _ = relabel_sequential(label_image, offset=1)
return (label_image)
tst_lbl_a = label_mask(mask, min_pixel=0)
test_eq(tst_lbl_a.max(), 2)
test_eq(tst_lbl_a.min(), 0)
plt.imshow(tst_lbl_a);
tst_lbl_b = label_mask(mask, min_pixel=150)
test_eq(tst_lbl_b.max(), 1)
plt.imshow(tst_lbl_b);
#export
def get_candidates(labels_a, labels_b):
    '''Get candidate masks for ROI-wise analysis'''
    label_stack = np.dstack((labels_a, labels_b))
    candidates = np.unique(label_stack.reshape(-1, label_stack.shape[2]), axis=0)
    # Remove Zero Entries
    candidates = candidates[np.prod(candidates, axis=1) > 0]
    return candidates
#export
def iou_mapping(labels_a, labels_b):
'''Compare masks using ROI-wise analysis'''
candidates = get_candidates(labels_a, labels_b)
if candidates.size > 0:
# create a similarity matrix
dim_a = np.max(candidates[:,0])+1
dim_b = np.max(candidates[:,1])+1
similarity_matrix = np.zeros((dim_a, dim_b))
for x,y in candidates:
roi_a = (labels_a == x).astype(np.uint8).flatten()
roi_b = (labels_b == y).astype(np.uint8).flatten()
similarity_matrix[x,y] = 1-jaccard(roi_a, roi_b)
row_ind, col_ind = linear_sum_assignment(-similarity_matrix)
return(similarity_matrix[row_ind,col_ind],
row_ind, col_ind,
np.max(labels_a),
np.max(labels_b)
)
else:
return([],
np.nan, np.nan,
np.max(labels_a),
np.max(labels_b)
)
test_eq(iou_mapping(tst_lbl_a, tst_lbl_a), ([0., 1., 1], [0, 1, 2], [0, 1, 2], 2, 2))
test_eq(iou_mapping(tst_lbl_a, tst_lbl_b), ([0., 1.], [0, 2], [0, 1], 2, 1))
#export
def calculate_roi_measures(*masks, iou_threshold=.5, **kwargs):
"Calculates precision, recall, and f1_score on ROI-level"
labels = [label_mask(m, **kwargs) for m in masks]
matches_iou, _,_, count_a, count_b = iou_mapping(*labels)
matches = np.sum(np.array(matches_iou) > iou_threshold)
precision = matches/count_a
recall = matches/count_b
f1_score = 2 * (precision * recall) / (precision + recall)
return recall, precision, f1_score
test_eq(calculate_roi_measures(mask, mask), (1.0, 1.0, 1.0))
test_eq(calculate_roi_measures(mask, mask, min_pixel=150), (1.0, 1.0, 1.0))
###Output
_____no_output_____
###Markdown
ROI Export to ImageJ
###Code
#export
def export_roi_set(mask, intensity_image, name='RoiSet', path=Path('.'), ascending=True, min_pixel=0):
"EXPERIMENTAL: Export mask regions to imageJ ROI Set"
roifile = import_package('roifile')
_, comps = cv2.connectedComponents(mask.astype('uint8'), connectivity=4)
props = skimage.measure.regionprops_table(comps, intensity_image, properties=('area', 'coords', 'mean_intensity'))
df_props = pd.DataFrame(props)
df_props = df_props[df_props.area>min_pixel].sort_values('mean_intensity', ascending=ascending).reset_index()
i = 1
with zipfile.ZipFile(path/f'{name}.zip', mode='w') as myzip:
for _, row in df_props.iterrows():
contours = skimage.measure.find_contours(comps==row['index']+1, level=0.5, fully_connected='low')
for cont in contours:
roi_name = f'{i:04d}-{row.mean_intensity:3f}.roi'
points = np.array([cont[:,1]+0.5, cont[:,0]+0.5]).T
roi = roifile.ImagejRoi.frompoints(points)
roi.tofile(roi_name)
myzip.write(roi_name)
os.remove(roi_name)
i += 1
# EXPERIMENTAL, needs more testing
export_roi_set(mask, mask)
###Output
_____no_output_____
###Markdown
Miscellaneous
###Code
#export
def calc_iterations(n_iter, ds_length, bs):
"Calculate the number of required epochs for 'n_iter' iterations."
iter_per_epoch = ds_length/bs
return int(np.ceil(n_iter/iter_per_epoch))
test_eq(calc_iterations(100, 8, 4), 50)
#export
def get_label_fn(img_path, msk_dir_path):
'Infers suffix from mask name and return label_fn'
msk_path = [x for x in msk_dir_path.iterdir() if x.name.startswith(img_path.stem)]
mask_suffix = msk_path[0].name[len(img_path.stem):]
return lambda o: msk_dir_path/f'{o.stem}{mask_suffix}'
#exports
def save_mask(mask, path, filetype='.png'):
mask = mask.astype(np.uint8) if np.max(mask)>1 else (mask*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), mask)
#exports
def save_unc(unc, path, filetype='.png'):
unc = (unc/unc.max()*255).astype(np.uint8)
imageio.imsave(path.with_suffix(filetype), unc)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import *
notebook2script()
###Output
Converted 00_learner.ipynb.
Converted 01_models.ipynb.
Converted 02_data.ipynb.
Converted 03_metrics.ipynb.
Converted 05_losses.ipynb.
Converted 06_utils.ipynb.
Converted 07_tta.ipynb.
Converted 08_gui.ipynb.
Converted 09_gt.ipynb.
Converted add_information.ipynb.
Converted deepflash2.ipynb.
Converted gt_estimation.ipynb.
Converted index.ipynb.
Converted model_library.ipynb.
Converted predict.ipynb.
Converted train.ipynb.
Converted tutorial.ipynb.
|
reports/60_compare-anc-2021-11-01.ipynb | ###Markdown
Load and process data
###Code
# read ancestry data
train, test = load_train_test(f"../data/raw/records25k_data_train.csv", f"../data/raw/records25k_data_test.csv")
input_names_train, weighted_actual_names_train, candidate_names_train = train
input_names_test, weighted_actual_names_test, candidate_names_test = test
candidate_names_all = np.concatenate((candidate_names_train, candidate_names_test))
input_names_all = input_names_train + input_names_test
weighted_actual_names_all = weighted_actual_names_train + weighted_actual_names_test
###Output
_____no_output_____
###Markdown
Model
###Code
# various coders
caverphone_one = CaverphoneOne()
caverphone_two = CaverphoneTwo()
refined_soundex = RefinedSoundex()
# tfidf
tfidf_vectorizer = TfidfVectorizer(ngram_range=(1, 3), analyzer="char_wb", min_df=10, max_df=0.5)
tfidf_X_train = tfidf_vectorizer.fit_transform(candidate_names_train)
tfidf_X_test = tfidf_vectorizer.transform(candidate_names_test)
tfidf_X_all = vstack((tfidf_X_train, tfidf_X_test))
# autoencoder with triplet loss
triplet_model = torch.load("../data/models/anc-triplet-bilstm-100-512-40-05.pth")
# move to cpu for evaluation so we don't run out of GPU memory
triplet_model.to("cpu")
triplet_model.device = "cpu"
SimilarityAlgo = namedtuple("SimilarityAlgo", "name min_threshold max_threshold distances")
similarity_algos = [
SimilarityAlgo("tfidf", 0.5, 1.0, False),
SimilarityAlgo("levenshtein", 0.5, 1.0, False),
SimilarityAlgo("damerau_levenshtein", 0.5, 1.0, False),
SimilarityAlgo("jaro_winkler", 0.5, 1.0, False),
SimilarityAlgo("triplet", 0.01, 1.0, True),
]
coding_algos = [
"soundex",
"nysiis",
"metaphone",
"caverphone1",
"caverphone2",
"refined_soundex",
"double_metaphone",
"cologne_phonetics",
"match_rating",
]
def calc_similarity_to(name, algo="levenshtein"):
name = remove_padding(name)
def calc_similarity(row):
cand_name = remove_padding(row[0])
similarity = 0.0
if algo == "levenshtein":
dist = jellyfish.levenshtein_distance(name, cand_name)
similarity = 1 - (dist / max(len(name), len(cand_name)))
elif algo == "damerau_levenshtein":
dist = jellyfish.damerau_levenshtein_distance(name, cand_name)
similarity = 1 - (dist / max(len(name), len(cand_name)))
elif algo == "jaro_winkler":
similarity = jellyfish.jaro_winkler_similarity(name, cand_name)
elif algo == "caverphone1":
similarity = 1.0 if caverphone_one._pre_process(name) == caverphone_one._pre_process(cand_name) else 0.0
elif algo == "caverphone2":
similarity = 1.0 if caverphone_two._pre_process(name) == caverphone_two._pre_process(cand_name) else 0.0
elif algo == "refined_soundex":
similarity = 1.0 if refined_soundex.phonetics(name) == refined_soundex.phonetics(cand_name) else 0.0
elif algo == "double_metaphone":
dm1 = doublemetaphone(name)
dm2 = doublemetaphone(cand_name)
similarity = 1.0 if any(code in dm2 for code in dm1) else 0.0
elif algo == "cologne_phonetics":
similarity = 1.0 if cologne_phonetics.encode(name)[0][1] == cologne_phonetics.encode(cand_name)[0][1] else 0.0
elif algo == "match_rating":
similarity = 1.0 if jellyfish.match_rating_comparison(name, cand_name) else 0.0
elif algo == "soundex":
similarity = 1.0 if jellyfish.soundex(name) == jellyfish.soundex(cand_name) else 0.0
elif algo == "nysiis":
similarity = 1.0 if jellyfish.nysiis(name) == jellyfish.nysiis(cand_name) else 0.0
elif algo == "metaphone":
similarity = 1.0 if jellyfish.metaphone(name) == jellyfish.metaphone(cand_name) else 0.0
return similarity
return calc_similarity
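# Quick sketch: calc_similarity_to returns a scorer bound to one (padded) query name;
# the scorer is applied to a single-element row, as get_similars does below.
_sim = calc_similarity_to("<schumacher>", algo="jaro_winkler")
_score = _sim(["<schumaker>"])  # similarity in [0, 1]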
# test double metaphone
name = "smith"
cand_name = "schmidt"
dm1 = doublemetaphone(name)
dm2 = doublemetaphone(cand_name)
similarity = 1.0 if any(code in dm2 for code in dm1) else 0.0
print("dm1", dm1)
print("dm2", dm2)
print("similarity", similarity)
###Output
_____no_output_____
###Markdown
Similarity Function
###Code
def get_similars(shared, name=''):
candidate_names_all, k, algo, tfidf_vectorizer, tfidf_X_all = shared
if algo == "tfidf":
x = tfidf_vectorizer.transform([name]).toarray()
scores = safe_sparse_dot(tfidf_X_all, x.T).flatten()
else:
scores = np.apply_along_axis(calc_similarity_to(name, algo), 1, candidate_names_all[:, None])
sorted_scores_idx = np.argsort(scores)[::-1][:k]
candidate_names = candidate_names_all[sorted_scores_idx]
candidate_scores = scores[sorted_scores_idx]
return list(zip(candidate_names, candidate_scores))
###Output
_____no_output_____
###Markdown
Demo
###Code
# get_similars('schumacher', 10, 'jaro_winkler', True)
get_similars((candidate_names_all, 10, "levenshtein", None, None), "<bostelman>")
###Output
_____no_output_____
###Markdown
Test tfidf
###Code
get_similars((candidate_names_all, 10, "tfidf", tfidf_vectorizer, tfidf_X_all), "<schumacher>")
###Output
_____no_output_____
###Markdown
Test levenshtein
###Code
input_names_test[251]
weighted_actual_names_test[251]
k = 100 # Number of candidates to consider
similar_names_scores = [get_similars((candidate_names_all, k, "levenshtein", None, None), input_names_test[251])]
similar_names_scores[0][:5]
# Ugh - how can I create a 3D array with (str, float) as the third axis without taking apart and re-assembling the array?
# names is a 2D array axis 0 = names, axis 1 = name of k similar-names
names = np.array(list(list(cell[0] for cell in row) for row in similar_names_scores), dtype="O")
# scores is a 2D array axis 0 = names, axis 1 = score of k similar-names
scores = np.array(list(list(cell[1] for cell in row) for row in similar_names_scores), dtype="f8")
# similar_names is now a 3D array axis 0 = names, axis 1 = k similar-names, axis 2 = name or score
similar_names_scores = np.dstack((names, scores))
metrics.weighted_recall_at_threshold(weighted_actual_names_test[251], similar_names_scores[0], 0.85)
metrics.weighted_recall_at_threshold(weighted_actual_names_test[251], similar_names_scores[0], 0.75)
###Output
_____no_output_____
###Markdown
Test Soundex
###Code
k = 1000 # Number of candidates to consider
similar_names_scores = [get_similars((candidate_names_all, k, "soundex", None, None), input_names_test[251])]
similar_names_scores[0][:5]
# Ugh - how can I create a 3D array with (str, float) as the third axis without taking apart and re-assembling the array?
# names is a 2D array axis 0 = names, axis 1 = name of k similar-names
names = np.array(list(list(cell[0] for cell in row) for row in similar_names_scores), dtype="O")
# scores is a 2D array axis 0 = names, axis 1 = score of k similar-names
scores = np.array(list(list(cell[1] for cell in row) for row in similar_names_scores), dtype="f8")
# similar_names is now a 3D array axis 0 = names, axis 1 = k similar-names, axis 2 = name or score
similar_names_scores = np.dstack((names, scores))
metrics.weighted_recall_at_threshold(weighted_actual_names_test[251], similar_names_scores[0], 0.5)
metrics.precision_at_threshold(weighted_actual_names_test[251], similar_names_scores[0], 0.5)
###Output
_____no_output_____
###Markdown
Evaluate each algorithm
###Code
def triplet_eval(triplet_model, input_names, candidate_names_all, k):
MAX_NAME_LENGTH = 30
char_to_idx_map, idx_to_char_map = build_token_idx_maps()
# Get embeddings for input names
input_names_X, _ = convert_names_to_model_inputs(input_names, char_to_idx_map, MAX_NAME_LENGTH)
input_names_encoded = triplet_model(input_names_X, just_encoder=True).detach().numpy()
# Get embeddings for candidate names
candidate_names_all_X, _ = convert_names_to_model_inputs(
candidate_names_all, char_to_idx_map, MAX_NAME_LENGTH
)
candidate_names_all_encoded = triplet_model(candidate_names_all_X, just_encoder=True).detach().numpy()
return get_best_matches(
input_names_encoded, candidate_names_all_encoded, candidate_names_all, num_candidates=k, metric="euclidean"
)
k = 1000 # Number of candidates to consider
actual_names_all = [[name for name, _, _ in name_weights] for name_weights in weighted_actual_names_all]
figure, ax = plt.subplots(1, 1, figsize=(20, 15))
ax.set_title("PR at threshold")
colors = cm.rainbow(np.linspace(0, 1, len(similarity_algos)))
# TODO use input_names_test and weighted_Actual_names_test
input_names_sample = input_names_all
for algo, color in zip(similarity_algos, colors):
print(algo.name)
if algo.name == "triplet":
similar_names_scores = triplet_eval(triplet_model, input_names_sample, candidate_names_all, k)
else:
with WorkerPool(shared_objects=(candidate_names_all, k, algo.name, tfidf_vectorizer, tfidf_X_all)) as pool:
similar_names_scores = pool.map(get_similars, input_names_sample, progress_bar=True)
similar_names = [[name for name, _ in name_similarities] for name_similarities in similar_names_scores]
names = np.array(list(list(cell[0] for cell in row) for row in similar_names_scores), dtype="O")
scores = np.array(list(list(cell[1] for cell in row) for row in similar_names_scores), dtype="f8")
similar_names_scores = np.dstack((names, scores))
precisions, recalls = metrics.precision_weighted_recall_at_threshold(
weighted_actual_names_all, similar_names_scores,
min_threshold=algo.min_threshold, max_threshold=algo.max_threshold, distances=algo.distances
)
ax.plot(recalls, precisions, "o--", color=color, label=algo.name)
ax.legend()
plt.show()
k = 1000 # Number of candidates to consider
actual_names_all = [[name for name, _, _ in name_weights] for name_weights in weighted_actual_names_all]
figure, ax = plt.subplots(1, 1, figsize=(20, 15))
ax.set_title("PR at threshold")
colors = cm.rainbow(np.linspace(0, 1, len(coding_algos)+2))
input_names_sample = input_names_all
# plot anc-triplet-bilstm-100-512-40-05 model
ax.plot([.809], [.664], "o--", color=colors[0], label="triplet-cluster")
ax.plot([.594], [.543], "o--", color=colors[1], label="dam-lev-cluster")
for algo, color in zip(coding_algos, colors[2:]):
print(algo)
# similar_names_scores = list(map(lambda x: get_similars(x, k=k, algo=algo), tqdm(input_names_all)))
with WorkerPool(shared_objects=(candidate_names_all, k, algo, tfidf_vectorizer, tfidf_X_all)) as pool:
similar_names_scores = pool.map(get_similars, input_names_sample, progress_bar=True)
similar_names = [[name for name, _ in name_similarities] for name_similarities in similar_names_scores]
names = np.array(list(list(cell[0] for cell in row) for row in similar_names_scores), dtype="O")
scores = np.array(list(list(cell[1] for cell in row) for row in similar_names_scores), dtype="f8")
similar_names_scores = np.dstack((names, scores))
precision = metrics.avg_precision_at_threshold(weighted_actual_names_all, similar_names_scores, 0.5)
recall = metrics.avg_weighted_recall_at_threshold(weighted_actual_names_all, similar_names_scores, 0.5)
print(f"precision={precision} recall={recall}")
precisions = [precision]
recalls = [recall]
ax.plot(recalls, precisions, "o--", color=color, label=algo)
ax.legend()
plt.show()
###Output
_____no_output_____ |
Python/CNN.ipynb | ###Markdown
Packages Download and Installation
###Code
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D, Reshape, Dropout
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# one hot encode outputs
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
X_train = X_train.astype('float32')
X_train /= 255.
X_test = X_test.astype('float32')
X_test /= 255.
# Input image dimensions
img_width = X_train.shape[1]
img_height = X_train.shape[2]
# reshape input data
X_train = X_train.reshape( X_train.shape[0], img_width, img_height, 1)
X_test = X_test.reshape( X_test.shape[0], img_width, img_height, 1)
# Define a few parameters to be used in the CNN model
batch_size = 128
num_classes = y_train.shape[1]
dense_layer_size = 128
# build model
model = Sequential()
model.add(Conv2D(32,
                 # kernel size
                 (3, 3),
                 input_shape=(img_width, img_height, 1),
                 activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64,
                 # kernel size
                 (3, 3),
                 activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(dense_layer_size, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=batch_size)
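# Evaluate the trained model on the held-out test set; `score` holds [loss, accuracy]
score = model.evaluate(X_test, y_test, verbose=0)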
###Output
_____no_output_____ |
cross_platform/Jypyter_notebooks/sentiment_sentisead.ipynb | ###Markdown
Macbook startup codes
###Code
# import nltk
# nltk.download('averaged_perceptron_tagger')
from __future__ import division
%load_ext autoreload
%autoreload 2
from django.conf import settings
import cPickle as pickle
import csv
import os
import sys
import numpy as np
import pandas as pd
# sys.path.append('c:\dev\opinion\opinion\python\opinion')
import utils.fileutils as fileutils
import utils.metrics as metrics
import nltk
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Initial variables setup
###Code
rootdir = "/Users/alamin/research/sentiment/sentisead/"
codedir = os.path.join(rootdir, "code")
datadir = os.path.join(rootdir, "data")
if rootdir not in sys.path:
sys.path.append(rootdir)
if codedir not in sys.path:
sys.path.append(codedir)
os.chdir(codedir)
infile_org = os.path.join(datadir, "ResultsConsolidatedWithEnsembleAssessment.xlsx")
infile_disa = os.path.join(datadir, "alamin", "Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx")
infile_org_name = "ResultsConsolidatedWithEnsembleAssessment.xlsx"
infile_disa_name = "Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx"
outfile = os.path.join(datadir, "alamin_consolidated.xlsx")
# the later cells reference `infile`/`infile_name`; default them to the original results
# file here (an assumption; switch to the Disa variants as needed)
infile = infile_org
infile_name = infile_org_name
!pwd
###Output
/Users/alamin/research/sentiment/sentisead/code
###Markdown
Running Sentisead
###Code
import Features as feats
import SentiseadTenFold as seadTen
prep = feats.PrepareFeatureValuesForClassification()
prep.computeAndEncodeDiversityScores(infile, outfile)
algo = "RF"
learnerCol = "Senti4SD"
sead = seadTen.Sentisead()
sead.computePerformanceOverallOfLearner(algo, learnerCol)
import DetectAllWrong as daw
def isAllWrong(truth, senti4sd, senticr, sentise):
if truth not in [senti4sd, senticr, sentise]:
return 1
else:
return 0
labels = []
df = pd.read_excel(infile, encoding="ISO-8859-1")
for index, row in df.iterrows():
truth = row["ManualLabel"]
dso = row["DsoLabelFullText"]
senti4sd = row["Senti4SD"]
senticr = row["SentiCR"]
sentise = row["SentistrengthSE"]
iswrong = isAllWrong(truth, senti4sd, senticr, sentise)
labels.append(iswrong)
df["AllWrong"] = (pd.Series(labels)).values
df.to_excel(outfile, encoding="ISO-8859-1", index=False)
algo = "RF"
clf = daw.DetectorAllWrong()
clf.consolidateResults(algo)
import DSOSE as dso
import SentiseadTenFold as seadTen
sead = seadTen.Sentisead(rootdir)
# sead.pipeline(algo="RF", ngram=3)
sead.computePerformancOfLearner(learnerCol="Senti4SD")
sead = seadTen.Sentisead(rootdir)
sead.setInfile("Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx")
sead.computePerformancOfLearner(learnerCol="BERT4SentiSE")
###Output
Sentisead constructed
infile is /Users/alamin/research/sentiment/sentisead/data/Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx
Disa added
Alamin performance method
File DatasetLinJIRA. Label = p. Precision = 0.28. Recall = 0.54. F1 = 0.37
File DatasetLinJIRA. Label = n. Precision = 0.70. Recall = 0.44. F1 = 0.54
File = DatasetLinJIRA. F1 Macro = 0.49. Micro = 0.47
-------------------------------
File BenchmarkUddinSO. Label = p. Precision = 0.24. Recall = 0.24. F1 = 0.24
File BenchmarkUddinSO. Label = o. Precision = 0.59. Recall = 0.51. F1 = 0.55
File BenchmarkUddinSO. Label = n. Precision = 0.19. Recall = 0.27. F1 = 0.22
File = BenchmarkUddinSO. F1 Macro = 0.34. Micro = 0.40
-------------------------------
File DatasetLinAppReviews. Label = p. Precision = 0.57. Recall = 0.28. F1 = 0.37
File DatasetLinAppReviews. Label = o. Precision = 0.08. Recall = 0.68. F1 = 0.15
File DatasetLinAppReviews. Label = n. Precision = 0.31. Recall = 0.12. F1 = 0.17
File = DatasetLinAppReviews. F1 Macro = 0.34. Micro = 0.25
-------------------------------
File DatasetLinSO. Label = p. Precision = 0.75. Recall = 0.37. F1 = 0.50
File DatasetLinSO. Label = o. Precision = 0.91. Recall = 0.95. F1 = 0.93
File DatasetLinSO. Label = n. Precision = 0.68. Recall = 0.76. F1 = 0.72
File = DatasetLinSO. F1 Macro = 0.73. Micro = 0.87
-------------------------------
File DatasetSenti4SDSO. Label = p. Precision = 0.94. Recall = 0.93. F1 = 0.93
File DatasetSenti4SDSO. Label = o. Precision = 0.87. Recall = 0.85. F1 = 0.86
File DatasetSenti4SDSO. Label = n. Precision = 0.85. Recall = 0.87. F1 = 0.86
File = DatasetSenti4SDSO. F1 Macro = 0.88. Micro = 0.88
-------------------------------
File OrtuJIRA. Label = p. Precision = 0.79. Recall = 0.83. F1 = 0.81
File OrtuJIRA. Label = o. Precision = 0.90. Recall = 0.91. F1 = 0.90
File OrtuJIRA. Label = n. Precision = 0.81. Recall = 0.74. F1 = 0.77
File = OrtuJIRA. F1 Macro = 0.83. Micro = 0.87
-------------------------------
###Markdown
Understanding
###Code
sead = seadTen.Sentisead(basedir=rootdir, infile_name=infile_name)
sead.pipeline(algo="RF", ngram=1)
###Output
About to create train test files
About to call getVectorizer
About to call trainTestSupervisedDetector
method trainTestSentiCRCustomized output dir is created at: /Users/alamin/research/sentiment/sentisead/output/Sentisead_RF
('DatasetLinJIRA', 0)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_0_model.pkl')
('DatasetLinJIRA', 1)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_1_model.pkl')
('DatasetLinJIRA', 2)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_2_model.pkl')
('DatasetLinJIRA', 3)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_3_model.pkl')
('DatasetLinJIRA', 4)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_4_model.pkl')
('DatasetLinJIRA', 5)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_5_model.pkl')
('DatasetLinJIRA', 6)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_6_model.pkl')
('DatasetLinJIRA', 7)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_7_model.pkl')
('DatasetLinJIRA', 8)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_8_model.pkl')
('DatasetLinJIRA', 9)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinJIRA_Train_9_model.pkl')
('BenchmarkUddinSO', 0)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_0_model.pkl')
('BenchmarkUddinSO', 1)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_1_model.pkl')
('BenchmarkUddinSO', 2)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_2_model.pkl')
('BenchmarkUddinSO', 3)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_3_model.pkl')
('BenchmarkUddinSO', 4)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_4_model.pkl')
('BenchmarkUddinSO', 5)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_5_model.pkl')
('BenchmarkUddinSO', 6)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_6_model.pkl')
('BenchmarkUddinSO', 7)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_7_model.pkl')
('BenchmarkUddinSO', 8)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_8_model.pkl')
('BenchmarkUddinSO', 9)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/BenchmarkUddinSO_Train_9_model.pkl')
('DatasetLinAppReviews', 0)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_0_model.pkl')
('DatasetLinAppReviews', 1)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_1_model.pkl')
('DatasetLinAppReviews', 2)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_2_model.pkl')
('DatasetLinAppReviews', 3)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_3_model.pkl')
('DatasetLinAppReviews', 4)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_4_model.pkl')
('DatasetLinAppReviews', 5)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_5_model.pkl')
('DatasetLinAppReviews', 6)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_6_model.pkl')
('DatasetLinAppReviews', 7)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_7_model.pkl')
('DatasetLinAppReviews', 8)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_8_model.pkl')
('DatasetLinAppReviews', 9)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinAppReviews_Train_9_model.pkl')
('DatasetLinSO', 0)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_0_model.pkl')
('DatasetLinSO', 1)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_1_model.pkl')
('DatasetLinSO', 2)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_2_model.pkl')
('DatasetLinSO', 3)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_3_model.pkl')
('DatasetLinSO', 4)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_4_model.pkl')
('DatasetLinSO', 5)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_5_model.pkl')
('DatasetLinSO', 6)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_6_model.pkl')
('DatasetLinSO', 7)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_7_model.pkl')
('DatasetLinSO', 8)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_8_model.pkl')
('DatasetLinSO', 9)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetLinSO_Train_9_model.pkl')
('DatasetSenti4SDSO', 0)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_0_model.pkl')
('DatasetSenti4SDSO', 1)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_1_model.pkl')
('DatasetSenti4SDSO', 2)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_2_model.pkl')
('DatasetSenti4SDSO', 3)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_3_model.pkl')
('DatasetSenti4SDSO', 4)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_4_model.pkl')
('DatasetSenti4SDSO', 5)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_5_model.pkl')
('DatasetSenti4SDSO', 6)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_6_model.pkl')
('DatasetSenti4SDSO', 7)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_7_model.pkl')
('DatasetSenti4SDSO', 8)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_8_model.pkl')
('DatasetSenti4SDSO', 9)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/DatasetSenti4SDSO_Train_9_model.pkl')
('OrtuJIRA', 0)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_0_model.pkl')
('OrtuJIRA', 1)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_1_model.pkl')
('OrtuJIRA', 2)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_2_model.pkl')
('OrtuJIRA', 3)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_3_model.pkl')
('OrtuJIRA', 4)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_4_model.pkl')
('OrtuJIRA', 5)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_5_model.pkl')
('OrtuJIRA', 6)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_6_model.pkl')
('OrtuJIRA', 7)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_7_model.pkl')
('OrtuJIRA', 8)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_8_model.pkl')
('OrtuJIRA', 9)
('Algo = ', 'RF')
Training ....
Reading data from oracle..
Training classifier model..
('saving model ', '/Users/alamin/research/sentiment/sentisead/output/Sentisead_RF/OrtuJIRA_Train_9_model.pkl')
About to call consolidateResults
Consolidated result has been writen to /Users/alamin/research/sentiment/sentisead/output/consolidated/ResultsConsolidated_RF.xls
consolidated results methods has been called
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=150, n_jobs=None,
oob_score=False, random_state=None, verbose=0,
warm_start=False)
--------------------------------------------------------------------------------
Vectorizer
TfidfVectorizer(analyzer=u'word', binary=False, decode_error=u'strict',
dtype=<type 'numpy.float64'>, encoding=u'utf-8', input=u'content',
lowercase=True, max_df=0.5, max_features=None, min_df=3,
ngram_range=(1, 1), norm=u'l2', preprocessor=None, smooth_idf=True,
stop_words=None, strip_accents=None, sublinear_tf=True,
token_pattern=u'(?u)\\b\\w\\w+\\b',
tokenizer=<function tokenize_and_stem at 0x15c263ed8>,
use_idf=True, vocabulary=None)
--------------------------------------------------------------------------------
Overall Performance
Label = p. Precision = 0.833. Recall = 0.699. F1 = 0.760
Label = o. Precision = 0.787. Recall = 0.906. F1 = 0.842
Label = n. Precision = 0.824. Recall = 0.662. F1 = 0.734
F1 Macro = 0.784. Micro = 0.803
Macro Precision = 0.815. Recall = 0.755
-------------------------------
--------------------------------------------------------------------------------
By File Performance
--------------------------------------------------------------------------------
File DatasetLinJIRA. Label = p. Precision = 0.97. Recall = 0.93. F1 = 0.95
File DatasetLinJIRA. Label = n. Precision = 0.97. Recall = 0.99. F1 = 0.98
File = DatasetLinJIRA. F1 Macro = 0.96. Micro = 0.97
-------------------------------
File BenchmarkUddinSO. Label = p. Precision = 0.57. Recall = 0.23. F1 = 0.33
File BenchmarkUddinSO. Label = o. Precision = 0.64. Recall = 0.93. F1 = 0.76
File BenchmarkUddinSO. Label = n. Precision = 0.62. Recall = 0.20. F1 = 0.31
File = BenchmarkUddinSO. F1 Macro = 0.52. Micro = 0.63
-------------------------------
File DatasetLinAppReviews. Label = p. Precision = 0.85. Recall = 0.91. F1 = 0.88
File DatasetLinAppReviews. Label = o. Precision = nan. Recall = 0.00. F1 = nan
File DatasetLinAppReviews. Label = n. Precision = 0.79. Recall = 0.86. F1 = 0.83
File = DatasetLinAppReviews. F1 Macro = 0.57. Micro = 0.83
-------------------------------
File DatasetLinSO. Label = p. Precision = 0.69. Recall = 0.18. F1 = 0.29
File DatasetLinSO. Label = o. Precision = 0.85. Recall = 0.97. F1 = 0.90
File DatasetLinSO. Label = n. Precision = 0.60. Recall = 0.34. F1 = 0.44
File = DatasetLinSO. F1 Macro = 0.59. Micro = 0.83
-------------------------------
File DatasetSenti4SDSO. Label = p. Precision = 0.91. Recall = 0.94. F1 = 0.93
File DatasetSenti4SDSO. Label = o. Precision = 0.86. Recall = 0.80. F1 = 0.83
File DatasetSenti4SDSO. Label = n. Precision = 0.80. Recall = 0.85. F1 = 0.83
File = DatasetSenti4SDSO. F1 Macro = 0.86. Micro = 0.86
-------------------------------
File OrtuJIRA. Label = p. Precision = 0.79. Recall = 0.78. F1 = 0.78
File OrtuJIRA. Label = o. Precision = 0.87. Recall = 0.92. F1 = 0.90
File OrtuJIRA. Label = n. Precision = 0.86. Recall = 0.64. F1 = 0.73
File = OrtuJIRA. F1 Macro = 0.81. Micro = 0.86
-------------------------------
###Markdown
BERT4SentiSE exploration Standalone performance of BERT4SentiSE
###Code
sead = seadTen.Sentisead(basedir=rootdir, infile=infile_disa)
print "Overall Performance BERT"
infile = os.path.join(datadir, "alamin", infile_disa_name)
sead.computePerformanceOverallOfLearner(infile=infile, learnerCol="BERT4SentiSE")
print "By File Performance"
print "-"*80
sead.computePerformancOfLearner(infile=infile, learnerCol="BERT4SentiSE")
###Output
BERT has been added to the feature list
Sentisead constructed
infile: /Users/alamin/research/sentiment/sentisead/data/alamin/Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx
Overall Performance BERT
Label = p. Precision = 0.783. Recall = 0.785. F1 = 0.784
Label = o. Precision = 0.848. Recall = 0.861. F1 = 0.854
Label = n. Precision = 0.789. Recall = 0.756. F1 = 0.772
F1 Macro = 0.804. Micro = 0.820
Macro Precision = 0.807. Recall = 0.801
-------------------------------
By File Performance
--------------------------------------------------------------------------------
/Users/alamin/research/sentiment/sentisead/data/alamin/Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx
Alamin edited
File DatasetLinJIRA. Label = p. Precision = 0.94. Recall = 0.95. F1 = 0.95
File DatasetLinJIRA. Label = n. Precision = 0.98. Recall = 0.97. F1 = 0.97
File = DatasetLinJIRA. F1 Macro = 0.96. Micro = 0.97
For latex table: F1 macro micro followed by positive=>Neutral=>Negative Precision Recall F1
File = DatasetLinJIRA & 0.96 & 0.97 0.94 & 0.95 & 0.95 0.00 & 0.00 & 0.00 0.98 & 0.97 & 0.97
-------------------------------
File BenchmarkUddinSO. Label = p. Precision = 0.52. Recall = 0.49. F1 = 0.50
File BenchmarkUddinSO. Label = o. Precision = 0.73. Recall = 0.79. F1 = 0.76
File BenchmarkUddinSO. Label = n. Precision = 0.53. Recall = 0.44. F1 = 0.48
File = BenchmarkUddinSO. F1 Macro = 0.58. Micro = 0.66
For latex table: F1 macro micro followed by positive=>Neutral=>Negative Precision Recall F1
File = BenchmarkUddinSO & 0.58 & 0.66 0.52 & 0.49 & 0.50 0.73 & 0.79 & 0.76 0.53 & 0.44 & 0.48
-------------------------------
File DatasetLinAppReviews. Label = p. Precision = 0.86. Recall = 0.86. F1 = 0.86
File DatasetLinAppReviews. Label = o. Precision = nan. Recall = 0.00. F1 = nan
File DatasetLinAppReviews. Label = n. Precision = 0.75. Recall = 0.89. F1 = 0.82
File = DatasetLinAppReviews. F1 Macro = 0.56. Micro = 0.81
For latex table: F1 macro micro followed by positive=>Neutral=>Negative Precision Recall F1
File = DatasetLinAppReviews & 0.56 & 0.81 0.86 & 0.86 & 0.86 nan & 0.00 & nan 0.75 & 0.89 & 0.82
-------------------------------
File DatasetLinSO. Label = p. Precision = 0.71. Recall = 0.43. F1 = 0.53
File DatasetLinSO. Label = o. Precision = 0.91. Recall = 0.94. F1 = 0.93
File DatasetLinSO. Label = n. Precision = 0.66. Recall = 0.72. F1 = 0.69
File = DatasetLinSO. F1 Macro = 0.73. Micro = 0.87
For latex table: F1 macro micro followed by positive=>Neutral=>Negative Precision Recall F1
File = DatasetLinSO & 0.73 & 0.87 0.71 & 0.43 & 0.53 0.91 & 0.94 & 0.93 0.66 & 0.72 & 0.69
-------------------------------
File DatasetSenti4SDSO. Label = p. Precision = 0.93. Recall = 0.94. F1 = 0.93
File DatasetSenti4SDSO. Label = o. Precision = 0.86. Recall = 0.85. F1 = 0.85
File DatasetSenti4SDSO. Label = n. Precision = 0.86. Recall = 0.86. F1 = 0.86
File = DatasetSenti4SDSO. F1 Macro = 0.88. Micro = 0.88
For latex table: F1 macro micro followed by positive=>Neutral=>Negative Precision Recall F1
File = DatasetSenti4SDSO & 0.88 & 0.88 0.93 & 0.94 & 0.93 0.86 & 0.85 & 0.85 0.86 & 0.86 & 0.86
-------------------------------
File OrtuJIRA. Label = p. Precision = 0.77. Recall = 0.84. F1 = 0.81
File OrtuJIRA. Label = o. Precision = 0.91. Recall = 0.89. F1 = 0.90
File OrtuJIRA. Label = n. Precision = 0.80. Recall = 0.75. F1 = 0.77
File = OrtuJIRA. F1 Macro = 0.83. Micro = 0.86
For latex table: F1 macro micro followed by positive=>Neutral=>Negative Precision Recall F1
File = OrtuJIRA & 0.83 & 0.86 0.77 & 0.84 & 0.81 0.91 & 0.89 & 0.90 0.80 & 0.75 & 0.77
-------------------------------
###Markdown
SentiMoji exploration Standalone performance of Sentimoji
###Code
infile_sentimoji = os.path.join(datadir, "alamin", infile_disa_name)
df = pd.read_excel(infile_sentimoji, sheet_name="Sheet1")
# df = df[df['File'].lower().str.contains('BenchmarkUddinSO')]
# print(len(df))
# .astype(str).str.lower().str.contains(dataset)
# print(type(df[df['File']]))
datasets = ["DatasetLinJIRA", "BenchmarkUddinSO", "DatasetLinAppReviews",
"DatasetLinSO", "DatasetSenti4SDSO", "OrtuJIRA"]
for dataset in datasets:
dataset = dataset.lower()
df2 = df[df['File'].str.lower().str.contains(dataset)]
print("len: %d and distribution %s " % (len(df2), df2['sentimoji_HotEncoded'].unique()))
print(df2['sentimoji_HotEncoded'].unique())
infile_sentimoji = os.path.join(datadir, "alamin", infile_disa_name)
sead = seadTen.Sentisead(basedir=rootdir, infile=infile_sentimoji)
# print "Standalone Overall Performance Sentimoji"
# sead.computePerformanceOverallOfLearner(infile=infile_sentimoji, learnerCol="sentimoji")
print "By File Performance"
print "-"*80
sead.computePerformancOfLearner(infile=infile_sentimoji, learnerCol="sentimoji")
###Output
BERT has been added to the feature list
Sentisead constructed
infile: /Users/alamin/research/sentiment/sentisead/data/alamin/Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx
By File Performance
--------------------------------------------------------------------------------
/Users/alamin/research/sentiment/sentisead/data/alamin/Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx
Alamin edited
[u'o', u'o', u'o', u'o', u'p', u'n', u'n', u'n', u'o', u'o', u'o', u'n', u'o', u'o', u'o', u'o', u'o', u'n', u'n', u'n', u'o', u'o', u'n', u'n', u'o', u'p', u'n', u'o', u'o', u'o', u'o', u'p', u'o', u'o', u'n', u'n', u'n', u'o', u'o', u'o', u'o', u'o', u'p', u'o', u'p', u'p', u'p', u'p', u'o', u'o']
[u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o', u'o']
Going to call P R F1 for lable p
===> precision method is called with label: p
CM is [[ 0 1048 0]
[ 0 2635 0]
[ 0 839 0]]
label: 0 and col: [0 0 0]
+===== Total 0 col: [0 0 0]
File BenchmarkUddinSO. Label = p. Precision = nan. Recall = 0.00. F1 = nan
Going to call P R F1 for lable o
===> precision method is called with label: o
CM is [[ 0 1048 0]
[ 0 2635 0]
[ 0 839 0]]
label: 1 and col: [1048 2635 839]
+===== Total 4522 col: [1048 2635 839]
File BenchmarkUddinSO. Label = o. Precision = 0.58. Recall = 1.00. F1 = 0.74
Going to call P R F1 for lable n
===> precision method is called with label: n
CM is [[ 0 1048 0]
[ 0 2635 0]
[ 0 839 0]]
label: 2 and col: [0 0 0]
+===== Total 0 col: [0 0 0]
File BenchmarkUddinSO. Label = n. Precision = nan. Recall = 0.00. F1 = nan
File = BenchmarkUddinSO. F1 Macro = 0.25. Micro = 0.58
For latex table: F1 macro micro followed by positive=>Neutral=>Negative Precision Recall F1
File = BenchmarkUddinSO & 0.25 & 0.58 nan & 0.00 & nan 0.58 & 1.00 & 0.74 nan & 0.00 & nan
-------------------------------
###Markdown
Sentisead exploration Sentisead baseline performance(without BERT4SentiSE/SentiMoji)
###Code
sead = seadTen.Sentisead(basedir=rootdir, infile_name=infile_disa)
infile = os.path.join(datadir, "alamin", "ResultsConsolidated_sentisead_RF.xls")
sead.computePerformanceOverallOfLearner(infile=infile, learnerCol="Sentisead")
print "By File Performance"
print "-"*80
sead.computePerformancOfLearner(infile=infile, learnerCol="Sentisead")
###Output
BERT has been added to the feature list
Sentisead constructed
Label = p. Precision = 0.833. Recall = 0.699. F1 = 0.760
Label = o. Precision = 0.787. Recall = 0.906. F1 = 0.842
Label = n. Precision = 0.824. Recall = 0.662. F1 = 0.734
F1 Macro = 0.784. Micro = 0.803
Macro Precision = 0.815. Recall = 0.755
-------------------------------
By File Performance
--------------------------------------------------------------------------------
/Users/alamin/research/sentiment/sentisead/data/ResultsConsolidated_RF_alamin.xls
File DatasetLinJIRA. Label = p. Precision = 0.97. Recall = 0.93. F1 = 0.95
File DatasetLinJIRA. Label = n. Precision = 0.97. Recall = 0.99. F1 = 0.98
File = DatasetLinJIRA. F1 Macro = 0.96. Micro = 0.97
-------------------------------
File BenchmarkUddinSO. Label = p. Precision = 0.57. Recall = 0.23. F1 = 0.33
File BenchmarkUddinSO. Label = o. Precision = 0.64. Recall = 0.93. F1 = 0.76
File BenchmarkUddinSO. Label = n. Precision = 0.62. Recall = 0.20. F1 = 0.31
File = BenchmarkUddinSO. F1 Macro = 0.52. Micro = 0.63
-------------------------------
File DatasetLinAppReviews. Label = p. Precision = 0.85. Recall = 0.91. F1 = 0.88
File DatasetLinAppReviews. Label = o. Precision = nan. Recall = 0.00. F1 = nan
File DatasetLinAppReviews. Label = n. Precision = 0.79. Recall = 0.86. F1 = 0.83
File = DatasetLinAppReviews. F1 Macro = 0.57. Micro = 0.83
-------------------------------
File DatasetLinSO. Label = p. Precision = 0.69. Recall = 0.18. F1 = 0.29
File DatasetLinSO. Label = o. Precision = 0.85. Recall = 0.97. F1 = 0.90
File DatasetLinSO. Label = n. Precision = 0.60. Recall = 0.34. F1 = 0.44
File = DatasetLinSO. F1 Macro = 0.59. Micro = 0.83
-------------------------------
File DatasetSenti4SDSO. Label = p. Precision = 0.91. Recall = 0.94. F1 = 0.93
File DatasetSenti4SDSO. Label = o. Precision = 0.86. Recall = 0.80. F1 = 0.83
File DatasetSenti4SDSO. Label = n. Precision = 0.80. Recall = 0.85. F1 = 0.83
File = DatasetSenti4SDSO. F1 Macro = 0.86. Micro = 0.86
-------------------------------
File OrtuJIRA. Label = p. Precision = 0.79. Recall = 0.78. F1 = 0.78
File OrtuJIRA. Label = o. Precision = 0.87. Recall = 0.92. F1 = 0.90
File OrtuJIRA. Label = n. Precision = 0.86. Recall = 0.64. F1 = 0.73
File = OrtuJIRA. F1 Macro = 0.81. Micro = 0.86
-------------------------------
###Markdown
Sentisead performance after adding BERT4SentiSE
###Code
sead = seadTen.Sentisead(basedir=rootdir, infile_name=infile_disa)
# sead.pipeline(algo="RF", ngram=1)
infile = os.path.join(datadir, "alamin", "ResultsConsolidated_bert_sentisead_RF.xls")
sead.computePerformanceOverallOfLearner(infile=infile, learnerCol="Sentisead")
print "By File Performance"
print "-"*80
sead.computePerformancOfLearner(infile=infile, learnerCol="Sentisead")
###Output
Label = p. Precision = 0.836. Recall = 0.710. F1 = 0.768
Label = o. Precision = 0.796. Recall = 0.906. F1 = 0.848
Label = n. Precision = 0.831. Recall = 0.685. F1 = 0.751
F1 Macro = 0.793. Micro = 0.811
Macro Precision = 0.821. Recall = 0.767
-------------------------------
/Users/alamin/research/sentiment/sentisead/data/DisaResultsConsolidated_RF.xls
File DatasetLinJIRA. Label = p. Precision = 0.97. Recall = 0.93. F1 = 0.95
File DatasetLinJIRA. Label = n. Precision = 0.97. Recall = 0.99. F1 = 0.98
File = DatasetLinJIRA. F1 Macro = 0.96. Micro = 0.97
-------------------------------
File BenchmarkUddinSO. Label = p. Precision = 0.55. Recall = 0.23. F1 = 0.33
File BenchmarkUddinSO. Label = o. Precision = 0.64. Recall = 0.92. F1 = 0.76
File BenchmarkUddinSO. Label = n. Precision = 0.58. Recall = 0.20. F1 = 0.29
File = BenchmarkUddinSO. F1 Macro = 0.51. Micro = 0.63
-------------------------------
File DatasetLinAppReviews. Label = p. Precision = 0.84. Recall = 0.92. F1 = 0.88
File DatasetLinAppReviews. Label = o. Precision = nan. Recall = 0.00. F1 = nan
File DatasetLinAppReviews. Label = n. Precision = 0.80. Recall = 0.85. F1 = 0.83
File = DatasetLinAppReviews. F1 Macro = 0.57. Micro = 0.83
-------------------------------
File DatasetLinSO. Label = p. Precision = 0.79. Recall = 0.23. F1 = 0.36
File DatasetLinSO. Label = o. Precision = 0.88. Recall = 0.97. F1 = 0.92
File DatasetLinSO. Label = n. Precision = 0.73. Recall = 0.58. F1 = 0.64
File = DatasetLinSO. F1 Macro = 0.68. Micro = 0.86
-------------------------------
File DatasetSenti4SDSO. Label = p. Precision = 0.92. Recall = 0.94. F1 = 0.93
File DatasetSenti4SDSO. Label = o. Precision = 0.87. Recall = 0.82. F1 = 0.85
File DatasetSenti4SDSO. Label = n. Precision = 0.83. Recall = 0.87. F1 = 0.85
File = DatasetSenti4SDSO. F1 Macro = 0.88. Micro = 0.88
-------------------------------
File OrtuJIRA. Label = p. Precision = 0.79. Recall = 0.80. F1 = 0.80
File OrtuJIRA. Label = o. Precision = 0.89. Recall = 0.92. F1 = 0.90
File OrtuJIRA. Label = n. Precision = 0.84. Recall = 0.68. F1 = 0.75
File = OrtuJIRA. F1 Macro = 0.82. Micro = 0.86
-------------------------------
###Markdown
Sentisead performance after adding Sentimoji
###Code
sead = seadTen.Sentisead(basedir=rootdir, infile_name=infile_disa)
# make sure sentimoji column is added.
sead.pipeline(algo="RF", ngram=1)
sead = seadTen.Sentisead(basedir=rootdir, infile_name=infile_disa)
# sead.pipeline(algo="RF", ngram=1)
infile = os.path.join(datadir, "alamin", "ResultsConsolidated_sentimoji_sentisead_RF.xls")
sead.computePerformanceOverallOfLearner(infile=infile, learnerCol="Sentisead")
print "By File Performance"
print "-"*80
sead.computePerformancOfLearner(infile=infile, learnerCol="Sentisead")
###Output
BERT has been added to the feature list
Sentisead constructed
Label = p. Precision = 0.836. Recall = 0.705. F1 = 0.765
Label = o. Precision = 0.789. Recall = 0.904. F1 = 0.843
Label = n. Precision = 0.824. Recall = 0.668. F1 = 0.738
F1 Macro = 0.787. Micro = 0.805
Macro Precision = 0.816. Recall = 0.759
-------------------------------
By File Performance
--------------------------------------------------------------------------------
/Users/alamin/research/sentiment/sentisead/data/alamin/ResultsConsolidated_sentimoji_sentisead_RF.xls
File DatasetLinJIRA. Label = p. Precision = 0.97. Recall = 0.93. F1 = 0.95
File DatasetLinJIRA. Label = n. Precision = 0.97. Recall = 0.99. F1 = 0.98
File = DatasetLinJIRA. F1 Macro = 0.97. Micro = 0.97
-------------------------------
File BenchmarkUddinSO. Label = p. Precision = 0.56. Recall = 0.24. F1 = 0.34
File BenchmarkUddinSO. Label = o. Precision = 0.64. Recall = 0.92. F1 = 0.76
File BenchmarkUddinSO. Label = n. Precision = 0.59. Recall = 0.21. F1 = 0.31
File = BenchmarkUddinSO. F1 Macro = 0.52. Micro = 0.63
-------------------------------
File DatasetLinAppReviews. Label = p. Precision = 0.90. Recall = 0.95. F1 = 0.92
File DatasetLinAppReviews. Label = o. Precision = nan. Recall = 0.00. F1 = nan
File DatasetLinAppReviews. Label = n. Precision = 0.83. Recall = 0.93. F1 = 0.88
File = DatasetLinAppReviews. F1 Macro = 0.60. Micro = 0.87
-------------------------------
File DatasetLinSO. Label = p. Precision = 0.75. Recall = 0.21. F1 = 0.32
File DatasetLinSO. Label = o. Precision = 0.85. Recall = 0.97. F1 = 0.91
File DatasetLinSO. Label = n. Precision = 0.60. Recall = 0.39. F1 = 0.47
File = DatasetLinSO. F1 Macro = 0.61. Micro = 0.83
-------------------------------
File DatasetSenti4SDSO. Label = p. Precision = 0.92. Recall = 0.94. F1 = 0.93
File DatasetSenti4SDSO. Label = o. Precision = 0.85. Recall = 0.81. F1 = 0.83
File DatasetSenti4SDSO. Label = n. Precision = 0.81. Recall = 0.84. F1 = 0.83
File = DatasetSenti4SDSO. F1 Macro = 0.86. Micro = 0.86
-------------------------------
File OrtuJIRA. Label = p. Precision = 0.79. Recall = 0.78. F1 = 0.78
File OrtuJIRA. Label = o. Precision = 0.88. Recall = 0.92. F1 = 0.90
File OrtuJIRA. Label = n. Precision = 0.85. Recall = 0.66. F1 = 0.74
File = OrtuJIRA. F1 Macro = 0.81. Micro = 0.86
-------------------------------
###Markdown
Sentisead performance with Bert4sentise and sentimoji
###Code
print("Infile: %s" % (infile_disa))
sead = seadTen.Sentisead(basedir=rootdir, infile=infile_disa)
# make sure sentimoji column is added.
sead.pipeline(algo="RF", ngram=1)
sead = seadTen.Sentisead(basedir=rootdir, infile=infile_disa)
# sead.pipeline(algo="RF", ngram=1)
infile = os.path.join(datadir, "alamin", "ResultsConsolidated_bert_sentimoji_sentisead_RF.xls")
sead.computePerformanceOverallOfLearner(infile=infile, learnerCol="Sentisead")
print "By File Performance"
print "-"*80
sead.computePerformancOfLearner(infile=infile, learnerCol="Sentisead")
###Output
BERT has been added to the feature list
Sentisead constructed
infile: /Users/alamin/research/sentiment/sentisead/data/alamin/Disa_ResultsConsolidatedWithEnsembleAssessment.xlsx
Label = p. Precision = 0.831. Recall = 0.729. F1 = 0.777
Label = o. Precision = 0.809. Recall = 0.899. F1 = 0.852
Label = n. Precision = 0.830. Recall = 0.714. F1 = 0.768
F1 Macro = 0.802. Micro = 0.818
Macro Precision = 0.823. Recall = 0.781
-------------------------------
By File Performance
--------------------------------------------------------------------------------
/Users/alamin/research/sentiment/sentisead/data/alamin/ResultsConsolidated_bert_sentimoji_sentisead_RF.xls
File DatasetLinJIRA. Label = p. Precision = 0.97. Recall = 0.94. F1 = 0.95
File DatasetLinJIRA. Label = n. Precision = 0.97. Recall = 0.99. F1 = 0.98
File = DatasetLinJIRA. F1 Macro = 0.97. Micro = 0.97
-------------------------------
File BenchmarkUddinSO. Label = p. Precision = 0.58. Recall = 0.32. F1 = 0.41
File BenchmarkUddinSO. Label = o. Precision = 0.67. Recall = 0.90. F1 = 0.77
File BenchmarkUddinSO. Label = n. Precision = 0.60. Recall = 0.30. F1 = 0.40
File = BenchmarkUddinSO. F1 Macro = 0.56. Micro = 0.65
-------------------------------
File DatasetLinAppReviews. Label = p. Precision = 0.91. Recall = 0.94. F1 = 0.92
File DatasetLinAppReviews. Label = o. Precision = nan. Recall = 0.00. F1 = nan
File DatasetLinAppReviews. Label = n. Precision = 0.82. Recall = 0.94. F1 = 0.87
File = DatasetLinAppReviews. F1 Macro = 0.60. Micro = 0.87
-------------------------------
File DatasetLinSO. Label = p. Precision = 0.74. Recall = 0.24. F1 = 0.37
File DatasetLinSO. Label = o. Precision = 0.88. Recall = 0.96. F1 = 0.92
File DatasetLinSO. Label = n. Precision = 0.70. Recall = 0.61. F1 = 0.65
File = DatasetLinSO. F1 Macro = 0.68. Micro = 0.86
-------------------------------
File DatasetSenti4SDSO. Label = p. Precision = 0.93. Recall = 0.95. F1 = 0.94
File DatasetSenti4SDSO. Label = o. Precision = 0.87. Recall = 0.84. F1 = 0.85
File DatasetSenti4SDSO. Label = n. Precision = 0.85. Recall = 0.86. F1 = 0.85
File = DatasetSenti4SDSO. F1 Macro = 0.88. Micro = 0.88
-------------------------------
File OrtuJIRA. Label = p. Precision = 0.78. Recall = 0.79. F1 = 0.79
File OrtuJIRA. Label = o. Precision = 0.88. Recall = 0.91. F1 = 0.90
File OrtuJIRA. Label = n. Precision = 0.84. Recall = 0.70. F1 = 0.76
File = OrtuJIRA. F1 Macro = 0.82. Micro = 0.86
-------------------------------
|
jupyter_notebooks/PhiSpy.ipynb | ###Markdown
PhiSpyThis is a Jupyter Notebook that shows how to run PhiSpy manually. You can run through all the steps that PhiSpy takes to determine whether a genome contains a prophage, and inspect all of the data generated by PhiSpy.You will need to install [PhiSpy](https://github.com/linsalrob/PhiSpy#installation), [Jupyter Notebooks](https://jupyter.org/install), and [pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html)Note: PhiSpy does not normally use pandas, but we use it here to visualize the data!
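If these tools are not installed yet, the cell below sketches one way to install them with pip; the package names are assumptions here, and the installation links above are the authoritative instructions.
###Code
# A possible installation command; the PyPI package names are assumed, see the links above.
# Uncomment to run from inside the notebook, or run the pip command in a terminal.
# !pip install PhiSpy pandas notebook
###Output
_____no_output_____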
###Code
# set up the environment
import os
import sys
import gzip
from functools import reduce
import tempfile
import pandas as pd
from Bio import SeqIO
import PhiSpyModules
###Output
_____no_output_____
###Markdown
Check the PhiSpy versionWe recommend at least version 4.0.3, but preferably 4.1 or higher
###Code
print("PhiSpy version: " + PhiSpyModules.__version__)
###Output
PhiSpy version: 4.1.10
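###Markdown
If you want to enforce that recommendation programmatically, a minimal sketch is below; it simply compares version tuples and assumes a plain numeric version string like the one printed above.
###Code
# Turn e.g. "4.1.10" into (4, 1, 10) and compare it with the recommended minimum.
installed_version = tuple(int(p) for p in PhiSpyModules.__version__.split("."))
assert installed_version >= (4, 0, 3), "Please upgrade PhiSpy (4.1 or higher is recommended)"
###Output
_____no_output_____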
###Markdown
Define your genbank file here.You may need to set the full path to the file. You can use a gzip compressed file or an uncompressed file. Obviously, you will know whether your file is compressed or not, but this demonstrates how PhiSpy determines when a file is compressed.**This should be the only line you need to change to run PhiSpy completely!**
###Code
genbankfile = "../test_genbank_files/Yersinia_pestis_KIM.gb.gz"
###Output
_____no_output_____
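###Markdown
As an aside, compression detection is usually done by peeking at a file's magic bytes. The sketch below only illustrates that idea; it is not the actual `PhiSpyModules.is_gzip_file` implementation used in the next section.
###Code
# gzip files begin with the two magic bytes 0x1f 0x8b, so reading the first two
# bytes is enough to decide whether to open the file with gzip or as plain text.
def looks_like_gzip(path):
    with open(path, 'rb') as fh:
        return fh.read(2) == b'\x1f\x8b'
# e.g. looks_like_gzip(genbankfile) should be True for the .gz file defined above
###Output
_____no_output_____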
###Markdown
Parse the fileWe use BioPython to parse the file, but we also add a few additional methods to the standard BioPython object to ease parsing. We also merge or split compound features (those with more than one location along the chromosome) to appropriately handle them.Our `record` object is an extended `SeqIO.parse` object.
###Code
min_contig_size = 1000
if PhiSpyModules.is_gzip_file(genbankfile):
handle = gzip.open(genbankfile, 'rt')
else:
handle = open(genbankfile, 'r')
record = PhiSpyModules.SeqioFilter(filter(lambda x: len(x.seq) > min_contig_size, SeqIO.parse(handle, "genbank")))
handle.close()
# we check to make sure there are some contigs left to process
ncontigs = reduce(lambda sum, element: sum + 1, record, 0)
print(f"There are {ncontigs} contigs to predict prophages on!")
###Output
There are 6 contigs to predict prophages on!
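###Markdown
To make the remark about compound features concrete, the sketch below shows how Biopython represents a feature with more than one location and how its simple pieces can be read back; this is plain Biopython, not PhiSpy's internal merge/split logic.
###Code
# A CDS annotated over two separate spans is a single SeqFeature whose location is a
# CompoundLocation; .parts exposes the simple locations that have to be reconciled.
from Bio.SeqFeature import SeqFeature, FeatureLocation, CompoundLocation
compound_cds = SeqFeature(
    CompoundLocation([FeatureLocation(10, 40, strand=1), FeatureLocation(60, 90, strand=1)]),
    type="CDS")
parts = [(int(p.start), int(p.end), p.strand) for p in compound_cds.location.parts]
# parts == [(10, 40, 1), (60, 90, 1)]
###Output
_____no_output_____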
###Markdown
Define the parameters that we will useThese are normally provided as command line options, but for jupyter we set them here

parameter | meaning | options | default value
--- | --- | --- | ---
kmers_type | What do we count kmers with? | `all`, `codon`, `simple` | `all`
window_size | How many consecutive ORFs to include? | an integer | 30
record | the `Bio.SeqIO` object with all the sequences | a `Bio.SeqIO` object | `record`
expand_slope | whether to use the square of the slope of the Shannon scores | `True` or `False` | `False`
number | Number of consecutive genes in a region of window size that must be prophage genes | an integer | 5
nonprophage_genegaps | The number of non phage genes between prophages | an integer | 10
quiet | Don't make additional outputs | `True` or `False` | `True`

*Note*: You can add an additional parameter, `make_training_data` here (its actual value doesn't matter) that will append an additional column to the output for each ORF that includes a `1` (`True`) if the ORF is thought or stated to be a phage gene or `0` (`False`) otherwise.
###Code
parameters = {
'kmers_type': "all",
'window_size': 30,
'record': record,
'expand_slope': False,
'training_set': "data/trainSet_genericAll.txt",
'randomforest_trees': 5,
'threads': 4,
'quiet': True,
'nonprophage_genegaps': 10,
'number': 5,
'color' : True,
'evaluate':False,
'make_training_data':None,
'skip_search':False,
'phmms':None,
'phage_genes' : 2,
'metrics': ['orf_length_med', 'shannon_slope', 'at_skew', 'gc_skew', 'max_direction']
}
###Output
_____no_output_____
###Markdown
Generate the test dataThis is the step that actually does all the measurements!In this example, we convert the output to a pandas dataframe for visualization and exploration.
###Code
parameters['test_data'] = PhiSpyModules.measure_features(**parameters)
# note that if you include make_training_data you will need to add an "is_phage" column here
test_df = pd.DataFrame(parameters['test_data'], columns = ['orf_length_med', 'shannon_slope', 'at_skew',
'gc_skew', 'max_direction', 'phmms'])
test_df.head()
###Output
_____no_output_____
###Markdown
Run the random forestHere we run the random forest to identify the phages, and combine that into our initial table as the `rank` column.
###Code
parameters['rfdata'] = PhiSpyModules.call_randomforest(**parameters)
parameters['initial_tbl'] = PhiSpyModules.make_initial_tbl(**parameters)
parameters['output_dir'] = tempfile.mkdtemp()
initial_table_df = pd.DataFrame(parameters['initial_tbl'], columns = ['gene id', 'function', 'contig', 'start', 'stop', 'position', 'rank', 'my status', 'pp'])
initial_table_df.head()
###Output
_____no_output_____
###Markdown
Refine the predictionsFinally, we refine the predictions from the random forest and other metrics, and then predict the *att* sequences.
###Code
parameters['pp'] = PhiSpyModules.fixing_start_end(**parameters)
pp_df = pd.DataFrame.from_dict(parameters['pp']).transpose()
pp_df
###Output
_____no_output_____
###Markdown
Our `pp_df` data frame has our final prophage predictions for this genome! Make the final tableHere we just append the pp number of the prophage to the table so that we can see which rows of the data frame fall within predicted prophage (pp) regions.
###Code
parameters['final_tbl'] = []
for i in parameters['initial_tbl']:
my_fs = PhiSpyModules.evaluation.check_pp(i[2], i[3], i[4], parameters['pp'])
parameters['final_tbl'].append(i + [my_fs])
final_df = pd.DataFrame(parameters['final_tbl'], columns = ['gene id', 'function', 'contig', 'start', 'stop', 'position', 'rank', 'my status', 'pp', 'final status'])
final_df.head()
###Output
_____no_output_____ |
CODS_COMAD/SDC on all datasets/type2_focus_linear_classify_mlp2.ipynb | ###Markdown
Generate dataset
###Code
# imports needed by the cells below
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader

np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [-6,7],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [-5,-4],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(18,1)) #not required
a=np.reshape(a,(9,2))
plt.imshow(a)
desired_num = 2000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(a)
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
# mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
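# Each mosaic built above is a 9x2 array of points: one point drawn from a foreground
# class (0-2) sits at position fg_idx, and the other eight points are drawn from the
# background classes (3-9). The mosaic's label is the foreground class, and fore_idx
# records where the foreground point was placed.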
len(mosaic_list_of_images), mosaic_list_of_images[0]
mosaic_list_of_images_reshaped = np.reshape(mosaic_list_of_images, (2000,9,2))
mean_train = np.mean(mosaic_list_of_images_reshaped[0:1000], axis=0, keepdims= True)
print(mean_train.shape, mean_train)
std_train = np.std(mosaic_list_of_images_reshaped[0:1000], axis=0, keepdims= True)
print(std_train.shape, std_train)
mosaic_list_of_images = ( mosaic_list_of_images_reshaped - mean_train ) / std_train
print(np.mean(mosaic_list_of_images[0:1000], axis=0, keepdims= True))
print(np.std(mosaic_list_of_images[0:1000], axis=0, keepdims= True))
print(np.mean(mosaic_list_of_images[1000:2000], axis=0, keepdims= True))
print(np.std(mosaic_list_of_images[1000:2000], axis=0, keepdims= True))
mosaic_list_of_images.shape
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd1 = MosaicDataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000] , fore_idx[0:1000])
train_loader = DataLoader( msd1 ,batch_size= batch ,shuffle=True)
batch = 250
msd2 = MosaicDataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000] , fore_idx[1000:2000])
test_loader = DataLoader( msd2 ,batch_size= batch ,shuffle=True)
class Focus(nn.Module):
def __init__(self):
super(Focus, self).__init__()
self.fc1 = nn.Linear(2, 1, bias=False)
torch.nn.init.zeros_(self.fc1.weight)
# self.fc2 = nn.Linear(50, 10)
# self.fc3 = nn.Linear(10, 1)
def forward(self,z): #y is avg image #z batch of list of 9 images
y = torch.zeros([batch,2], dtype=torch.float64)
x = torch.zeros([batch,9],dtype=torch.float64)
y = y.to("cuda")
x = x.to("cuda")
# print(x.shape, z.shape)
for i in range(9):
# print(z[:,i].shape)
# print(self.helper(z[:,i])[:,0].shape)
x[:,i] = self.helper(z[:,i])[:,0]
# print(x.shape, z.shape)
x = F.softmax(x,dim=1)
# print(x.shape, z.shape)
# x1 = x[:,0]
# print(torch.mul(x[:,0],z[:,0]).shape)
for i in range(9):
# x1 = x[:,i]
y = y + torch.mul(x[:,i,None],z[:,i])
# print(x.shape, y.shape)
return x, y
def helper(self, x):
x = x.view(-1, 2)
# x = F.relu(self.fc1(x))
# x = F.relu(self.fc2(x))
x = (self.fc1(x))
return x
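# The Focus module above scores each of the 9 points of a mosaic with a shared linear
# layer, softmaxes the 9 scores into attention weights (the alphas it returns), and also
# returns the attention-weighted average point. For a batch of 250 mosaics the shapes are
# roughly alphas: (250, 9) and weighted average: (250, 2); the Classification module
# defined next classifies that averaged point into the 3 foreground classes.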
class Classification(nn.Module):
def __init__(self):
super(Classification, self).__init__()
self.fc1 = nn.Linear(2, 50)
self.fc2 = nn.Linear(50, 10)
self.fc3 = nn.Linear(10, 3)
torch.nn.init.xavier_normal_(self.fc1.weight)
torch.nn.init.zeros_(self.fc1.bias)
torch.nn.init.xavier_normal_(self.fc2.weight)
torch.nn.init.zeros_(self.fc2.bias)
torch.nn.init.xavier_normal_(self.fc3.weight)
torch.nn.init.zeros_(self.fc3.bias)
def forward(self, x):
x = x.view(-1, 2)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = (self.fc3(x))
return x
torch.manual_seed(12)
focus_net = Focus().double()
focus_net = focus_net.to("cuda")
torch.manual_seed(12)
classify = Classification().double()
classify = classify.to("cuda")
focus_net.helper( torch.randn((1,9,2)).double().to("cuda") )
focus_net.fc1.weight
classify.fc1.weight, classify.fc1.bias, classify.fc1.weight.shape
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer_classify = optim.Adam(classify.parameters(), lr=0.01 ) #, momentum=0.9)
optimizer_focus = optim.Adam(focus_net.parameters(), lr=0.01 ) #, momentum=0.9)
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
# print(outputs.shape)
_, predicted = torch.max(outputs.data, 1)
# print(predicted.shape)
for j in range(labels.size(0)):
count += 1
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
# print(focus, fore_idx[j], predicted[j])
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
print(count)
print("="*100)
col1.append(0)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
nos_epochs = 1000
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
epoch_loss = []
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
inputs, labels = inputs.to("cuda"), labels.to("cuda")
inputs = inputs.double()
# zero the parameter gradients
optimizer_focus.zero_grad()
optimizer_classify.zero_grad()
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
# print(outputs)
# print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1))
loss = criterion(outputs, labels)
loss.backward()
optimizer_focus.step()
optimizer_classify.step()
running_loss += loss.item()
mini = 3
if cnt % mini == mini-1:    # print every mini (= 3) mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini))
epoch_loss.append(running_loss/mini)
running_loss = 0.0
cnt=cnt+1
if epoch % 5 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
if(np.mean(epoch_loss) <= 0.001):
break
if epoch % 5 == 0:
col1.append(epoch + 1)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
# print("="*20)
# print("Train FTPT : ", col4)
# print("Train FFPT : ", col5)
#************************************************************************
#testing data set
# focus_net.eval()
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
# print("Test FTPT : ", col10)
# print("Test FFPT : ", col11)
# print("="*20)
print('Finished Training')
df_train = pd.DataFrame()
df_test = pd.DataFrame()
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
df_train
# plt.figure(12,12)
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.show()
plt.figure(figsize=(6,5))
plt.plot(col1,np.array(col4)/10, label ="FTPT")
plt.plot(col1,np.array(col5)/10, label ="FFPT")
plt.plot(col1,np.array(col6)/10, label ="FTPF")
plt.plot(col1,np.array(col7)/10, label ="FFPF")
plt.title("Dataset2 - SDC On Train set")
plt.grid()
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.legend()
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold')
plt.savefig(path+"ds2_train.png", bbox_inches="tight")
plt.savefig(path+"ds2_train.pdf", bbox_inches="tight")
plt.savefig("ds2_train.png", bbox_inches="tight")
plt.savefig("ds2_train.pdf", bbox_inches="tight")
plt.show()
df_test
# plt.figure(12,12)
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.show()
plt.figure(figsize=(6,5))
plt.plot(col1,np.array(col10)/10, label ="FTPT")
plt.plot(col1,np.array(col11)/10, label ="FFPT")
plt.plot(col1,np.array(col12)/10, label ="FTPF")
plt.plot(col1,np.array(col13)/10, label ="FFPF")
plt.title("Dataset2 - SDC On Test set")
plt.grid()
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.legend()
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage test data", fontsize=14, fontweight = 'bold')
plt.savefig(path+"ds2_test.png", bbox_inches="tight")
plt.savefig(path+"ds2_test.pdf", bbox_inches="tight")
plt.savefig("ds2_test.png", bbox_inches="tight")
plt.savefig("ds2_test.pdf", bbox_inches="tight")
plt.show()
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
count = 0
flag = 1
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
for j in range(labels.size(0)):
focus = torch.argmax(alphas[j])
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true += 1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false += 1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false += 1
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 test images: %d %%' % (
100 * correct / total))
print("total correct", correct)
print("total train set images", total)
print("focus_true_pred_true %d =============> FTPT : %d %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) )
print("focus_false_pred_true %d =============> FFPT : %d %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) )
print("focus_true_pred_false %d =============> FTPF : %d %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) )
print("focus_false_pred_false %d =============> FFPF : %d %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) )
print("argmax_more_than_half ==================> ",argmax_more_than_half)
print("argmax_less_than_half ==================> ",argmax_less_than_half)
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
inputs, labels , fore_idx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"), labels.to("cuda")
alphas, avg_images = focus_net(inputs)
outputs = classify(avg_images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 test images: %d %%' % ( 100 * correct / total))
print("total correct", correct)
print("total train set images", total)
###Output
_____no_output_____ |
encode-wold-10x-index.ipynb | ###Markdown
Preparing to submit Wold stranded samples. This got a little messy because a couple of libraries needed a top-up.
###Code
import os
import sys
import requests
import pandas
import paramiko
import json
from IPython import display
from pathlib import Path
from curation_common import *
from htsworkflow.submission.encoded import DCCValidator
from htsworkflow.submission.encoded import Document
from htsworkflow.submission.aws_submission import run_aws_cp
GCATDIR = os.path.expanduser('~/src/gcat')
if GCATDIR not in sys.path:
sys.path.append(GCATDIR)
import gcat
# live server & control file
server = ENCODED('www.encodeproject.org')
spreadsheet_name = '10X Wold Submission objects'
engine=None
#engine='odf'
# test server & datafile
#server = ENCODED('test.encodedcc.org')
#spreadsheet_name = os.path.expanduser('~diane/woldlab/ENCODE/C1-encode3-limb-2017-testserver.ods')
server.load_netrc()
validator = DCCValidator(server)
award = 'UM1HG009443'
###Output
_____no_output_____
###Markdown
Submit Documents
###Code
cellranger_uuid = '/documents/aca954d8-6133-428c-8b4a-2172dfa4066b/'
cellranger = Document(
os.path.expanduser('~/woldlab/ENCODE/cellranger_mkref.pdf'),
'pipeline protocol',
'Cell Ranger index building',
)
body = cellranger.create_if_needed(server, cellranger_uuid, validator)
if '@graph' in body:
print(body['@graph'][0]['@id'])
else:
print(body['@id'])
###Output
/documents/aca954d8-6133-428c-8b4a-2172dfa4066b/
###Markdown
Register Reference
###Code
book = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
references = book.parse('Reference', header=0)
created = server.post_sheet('/references/',
references,
verbose=True,
dry_run=False,
validator=validator)
print(len(created))
created[0]['@id'], created[0]['uuid']
###Output
_____no_output_____
###Markdown
Register Libraries
###Code
print(spreadsheet_name)
libraries = pandas.read_excel(spreadsheet_name, sheet_name='Library', header=0, engine=engine)
created = server.post_sheet('/libraries/',
libraries,
verbose=True,
dry_run=True,
validator=validator)
print(len(created))
if created:
libraries.to_excel('/dev/shm/libraries.xlsx', index=False)
###Output
_____no_output_____
###Markdown
Register Experiments
###Code
print(server.server)
experiments = pandas.read_excel(spreadsheet_name, sheet_name='Experiment', header=0, engine=engine)
experiments = experiments[experiments['accession'] != 'barbara approval needed']
created = server.post_sheet('/experiments/',
experiments,
verbose=True,
dry_run=True,
validator=validator)
print(len(created))
if created:
experiments.to_excel('/dev/shm/experiments.xlsx', index=False)
###Output
_____no_output_____
###Markdown
Register Replicates. Check Reference Files
###Code
book = gcat.get_file(spreadsheet_name, fmt='pandas_excel')
files = book.parse('ReferenceFile', header=0)
created = server.post_sheet('/files/', files,
verbose=True,
dry_run=True,
validator=validator)
print(len(created))
###Output
1
|
Project2/Bopt_UCB.ipynb | ###Markdown
Introduction to Bayesian optimization (ベイズ最適化入門): https://github.com/Ma-sa-ue/practice/blob/master/machine%20learning(python)/bayeisan_optimization.ipynb The original code was written for Python 2; a few modifications are needed to run it under Python 3.
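As a small illustration of the kind of change involved (a sketch, not taken from the original notebook): in Python 3, map() returns an iterator rather than a list, so its result has to be wrapped in list() before conversion to a NumPy array, which is exactly what is done for x2y in the code below.
###Code
# Minimal sketch of the Python 2 -> 3 fix used below: wrap map() in list().
import numpy as np
double = lambda v: 2.0 * v
xs = np.linspace(0.0, 1.0, 5)
ys = np.array(list(map(double, xs)))  # in Python 2, map() already returned a list
###Output
_____no_output_____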
###Code
%matplotlib inline
%run ../common/homemade_GPR.py
%run ../common/homemade_BO.py
import sys
import matplotlib.pyplot as plt
np.random.seed(seed=123)
#Define data, supervised data
def x2y(x):
f = 40.0*np.sin(x/1.0) - (0.3*(x+6.0))**2 - (0.2*(x-4.0))**2 - 1.0*np.abs(x+2.0) + np.random.normal(0,1,1)
return f
#
xmin = -20
xmax = 20
Nx = 1000
x = np.linspace(xmin, xmax, Nx)
y = list(map(x2y,x)) #for python3
y = np.array(y)
plt.plot(x, y) #### plot true data
plt.show()
#Define GPR and Bayesian opt.
GPR = Gaussian_Process_Regression(alpha = 1.0e-8)
#GPR.a1_RBF = 0.0
typical_scale=0.1
GPR.a1_RBF = 1.0
GPR.a2_RBF = typical_scale**2
GPR.a1_exp = 0.0
GPR.a2_exp = typical_scale
GPR.a1_const = 0.0
print(GPR.a1_RBF, GPR.a2_RBF, GPR.a1_exp, GPR.a2_exp, GPR.a1_const)
#
BO = Bayesian_opt()
#BO.acqui_name = 'EI'
#BO.acqui_name = 'PI'
BO.acqui_name = 'UCB'
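# UCB (upper confidence bound) typically scores a candidate x as mu(x) + kappa*sigma(x),
# trading exploitation (high predicted mean) against exploration (high uncertainty);
# the exact form used here is whatever homemade_BO.py implements.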
print('# The choice of acquisition function: ',BO.acqui_name)
#Definition of array as the initial condition
x_sample_init = np.array([])
y_sample_init = np.array([])
Ninitial = 2
for i in range(Ninitial):
x_point = np.random.uniform(xmin,xmax) # initial point is randomly chosen
x_sample_init = np.append(x_sample_init,x_point)
y_point = x2y(x_point)
y_sample_init = np.append(y_sample_init,y_point)
#
Nepoch = 16 # number of optimization iterations
nplotevery = Nepoch//16 # plot the results every this many iterations
mean, std, x_point, y_point, maxval_list = DO_BO(GPR, BO, x2y, x, x_sample_init, y_sample_init, Nepoch, nplotevery, answer_is_there=True)
plt.figure()
plt.plot(maxval_list)
plt.grid()
plt.show()
###Output
epoch = 0 , x_point, maxval = 2.902902902902902, 10.752429826046175
epoch = 1 , x_point, maxval = -20.0, 10.752429826046175
epoch = 2 , x_point, maxval = -4.984984984984985, 32.67958961214537
epoch = 3 , x_point, maxval = -10.95095095095095, 32.67958961214537
epoch = 4 , x_point, maxval = 20.0, 32.67958961214537
epoch = 5 , x_point, maxval = -15.195195195195195, 32.67958961214537
epoch = 6 , x_point, maxval = 15.315315315315317, 32.67958961214537
epoch = 7 , x_point, maxval = -1.2212212212212208, 32.67958961214537
epoch = 8 , x_point, maxval = -7.987987987987989, 32.67958961214537
epoch = 9 , x_point, maxval = 5.305305305305303, 32.67958961214537
epoch = 10 , x_point, maxval = 17.63763763763764, 32.67958961214537
epoch = 11 , x_point, maxval = -13.033033033033032, 32.67958961214537
epoch = 12 , x_point, maxval = -17.7977977977978, 32.67958961214537
epoch = 13 , x_point, maxval = 12.952952952952955, 32.67958961214537
epoch = 14 , x_point, maxval = -3.343343343343342, 32.67958961214537
epoch = 15 , x_point, maxval = 1.1411411411411407, 32.67958961214537
|
doc/nb/LRIS_blue_notes.ipynb | ###Markdown
Notes on the LRIS Blue reduction
###Code
# imports
import os, sys
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
sys.path.append(os.path.abspath('/Users/xavier/local/Python/PYPIT/src'))
import arload as pyp_arload
import ario as pyp_ario
###Output
_____no_output_____
###Markdown
Detectors. Note: LRISb has employed different detectors. We may need to make PYPIT backwards compatible. FITS file
###Code
fil = '/Users/xavier/PYPIT/LRIS_blue/Raw/b150910_2033.fits.gz'
hdu = fits.open(fil)
hdu.info()
head0 = hdu[0].header
head0['OBSTYPE']
head0
#head0['DATE']
plt.clf()
plt.imshow(hdu[1].data)
plt.show()
###Output
_____no_output_____
###Markdown
Display Raw LRIS image in Ginga
###Code
### Need to port readmhdufits
head0
reload(pyp_ario)
img, head = pyp_ario.read_lris('/Users/xavier/PYPIT/LRIS_blue/Raw/b150910_2070.fits',TRIM=True)
xdb.ximshow(img)
import subprocess
subprocess.call(["touch", "dum.fil"])
b = 'as'
'{0:s}'.format(b)
range(1,5)
tmp = np.ones((10,20))
tmp[0:1,:].shape
###Output
_____no_output_____ |
content/ch-ex/ex3.ipynb | ###Markdown
Building the Best AND Gate
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effect of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
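###Markdown
Before moving on, here is a small optional check (a sketch, not part of the original exercise; it assumes the installed Qiskit version exposes `transpile` at the top level and accepts `coupling_map`, `initial_layout` and `basis_gates` as shown). Transpiling a `cx` between two qubits that are not connected in `coupling_map` forces the transpiler to add routing gates, which is one reason the choice of qubits matters on real hardware.
###Code
# Sketch: a cx between qubits 0 and 4, which are not adjacent in coupling_map,
# has to be routed by the transpiler (typically by inserting SWAPs, i.e. extra cx gates).
from qiskit import transpile
qr_demo = QuantumRegister(5, 'demo')
demo = QuantumCircuit(qr_demo)
demo.cx(qr_demo[0], qr_demo[4])
routed = transpile(demo, coupling_map=coupling_map, initial_layout=[0, 1, 2, 3, 4], basis_gates=['u1', 'u2', 'u3', 'cx'])
ops = routed.count_ops()  # typically shows several cx gates rather than just one
###Output
_____no_output_____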
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
# The keyword q_2 specifies the qubit used to encode input2
# The keyword q_out specifies the qubit to be used as the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx (Toffoli) gate
qc.measure(qr[ q_out ],cr[0]) # the output qubit (q_out) is measured
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'0': 9046, '1': 954}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9013
Probability of correct answer for inputs 0 1
0.8989
Probability of correct answer for inputs 1 0
###Markdown
Building the Best AND Gate
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effect of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
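For reference, a rough sketch of what that sign-in would look like is shown below (commented out, since it is not needed here; the exact calls depend on the installed qiskit-ibmq-provider version, and it assumes an account token has already been saved with `IBMQ.save_account`).
###Code
# Rough sketch only (not executed here): running on the real device needs an IBMQ account.
# from qiskit import IBMQ
# IBMQ.load_account()                 # older versions use IBMQ.load_accounts()
# provider = IBMQ.get_provider()
# real_backend = provider.get_backend('ibmq_5_tenerife')
###Output
_____no_output_____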
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
# The keyword q_2 specifies the qubit used to encode input2
# The keyword q_out specifies the qubit to be used as the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx (Toffoli) gate
qc.measure(qr[ q_out ],cr[0]) # the output qubit (q_out) is measured
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'1': 991, '0': 9009}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9035
Probability of correct answer for inputs 0 1
0.8964
Probability of correct answer for inputs 1 0
0.8972
Probability of correct answer for inputs 1 1
0.9019
The lowest of these probabilities was 0.8964
###Markdown
Building the Best AND Gate. Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effect of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit used for the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
    qc.measure(qr[ q_out ],cr[0]) # the output is read out from qubit q_out
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'1': 991, '0': 9009}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9035
Probability of correct answer for inputs 0 1
0.8978
Probability of correct answer for inputs 1 0
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
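One simple thing to try, before redesigning the circuit at all, is to keep the `ccx` but vary the `q_1`, `q_2` and `q_out` keywords, since the gate and readout error rates differ from qubit to qubit. The cell below is only a sketch of that idea; the candidate mappings it tries are arbitrary examples, not a recommended layout.
###Code
# Sketch: compare a few (arbitrary) qubit placements for the ccx-based AND
# and report the one with the best worst-case probability.
candidate_mappings = [(0,1,2), (1,2,0), (2,0,1), (3,4,2)]
best_worst, best_mapping = 0, None
for (qa, qb, qo) in candidate_mappings:
    worst = 1
    for input1 in ['0','1']:
        for input2 in ['0','1']:
            counts = AND(input1, input2, q_1=qa, q_2=qb, q_out=qo)
            # the correct output is '1' only when both inputs are '1'
            correct = str(int( input1=='1' and input2=='1' ))
            worst = min(worst, counts.get(correct, 0)/10000)
    print('Worst-case probability for qubits', (qa, qb, qo), 'is', worst)
    if worst > best_worst:
        best_worst, best_mapping = worst, (qa, qb, qo)
print('\nBest of these mappings:', best_mapping, 'with probability', best_worst)
###Output
_____no_output_____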
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Building the Best AND Gate
Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you what `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effects of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
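###Markdown
The dictionary above is just a serialised noise model. If you want to experiment with noise levels yourself, a model with the same ingredients (depolarizing errors on the gates plus readout errors) can be built directly. The cell below is only a sketch of that idea, using arbitrary made-up error rates rather than the measured Tenerife values, and the import path may differ slightly between qiskit-aer versions.
###Code
# Sketch of a hand-built noise model with the same structure as the one above.
# The error rates used here are arbitrary placeholders, not measured values.
from qiskit.providers.aer.noise import NoiseModel, ReadoutError, depolarizing_error
toy_model = NoiseModel()
# depolarizing error on every single-qubit u2/u3 gate
toy_model.add_all_qubit_quantum_error(depolarizing_error(0.002, 1), ['u2', 'u3'])
# a stronger depolarizing error on every cx gate
toy_model.add_all_qubit_quantum_error(depolarizing_error(0.02, 2), ['cx'])
# symmetric readout error on each of the five qubits
for qubit in range(5):
    toy_model.add_readout_error(ReadoutError([[0.95, 0.05], [0.05, 0.95]]), [qubit])
# toy_model could now be passed to execute() in place of noise_model
###Output
_____no_output_____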
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit used for the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
    qc.measure(qr[ q_out ],cr[0]) # the output is read out from qubit q_out
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'1': 980, '0': 9020}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9014
Probability of correct answer for inputs 0 1
0.8994
Probability of correct answer for inputs 1 0
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Building the Best AND Gate
Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you what `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effects of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit used for the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
    qc.measure(qr[ q_out ],cr[0]) # the output is read out from qubit q_out
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'0': 8998, '1': 1002}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9003
Probability of correct answer for inputs 0 1
0.8983
Probability of correct answer for inputs 1 0
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Building the Best AND Gate
Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you what `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effects of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('aer_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit used for the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
    qc.measure(qr[ q_out ],cr[0]) # the output is read out from qubit q_out
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'1': 991, '0': 9009}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9035
Probability of correct answer for inputs 0 1
0.8978
Probability of correct answer for inputs 1 0
0.8995
Probability of correct answer for inputs 1 1
0.9046
The lowest of these probabilities was 0.8978
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
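One possible approach, sketched below rather than given as a definitive solution, is to replace the full `ccx` with a relative-phase Toffoli (the Margolus gate). It needs only three `cx` gates, and the extra phases it introduces are harmless here because the output qubit starts in state 0 and is measured straight away.
###Code
# Sketch: AND via a relative-phase Toffoli (Margolus gate), which needs only
# three cx gates instead of the six or so required for a full ccx.
def AND_margolus(input1, input2, q_1=0, q_2=1, q_out=2):
    qc = QuantumCircuit(qr, cr)
    # prepare the inputs, exactly as in the AND function above
    if input1=='1':
        qc.x( qr[ q_1 ] )
    if input2=='1':
        qc.x( qr[ q_2 ] )
    # Margolus gate: acts like ccx up to phases that the measurement ignores
    qc.ry( np.pi/4, qr[ q_out ] )
    qc.cx( qr[ q_2 ], qr[ q_out ] )
    qc.ry( np.pi/4, qr[ q_out ] )
    qc.cx( qr[ q_1 ], qr[ q_out ] )
    qc.ry( -np.pi/4, qr[ q_out ] )
    qc.cx( qr[ q_2 ], qr[ q_out ] )
    qc.ry( -np.pi/4, qr[ q_out ] )
    qc.measure( qr[ q_out ], cr[0] )
    job = execute(qc, backend, shots=10000, noise_model=noise_model,
                  coupling_map=coupling_map,
                  basis_gates=noise_model.basis_gates)
    return job.result().get_counts()

# quick check of the worst-case probability for this version
worst_margolus = 1
for input1 in ['0','1']:
    for input2 in ['0','1']:
        counts = AND_margolus(input1, input2)
        # the correct output is '1' only when both inputs are '1'
        correct = str(int( input1=='1' and input2=='1' ))
        worst_margolus = min(worst_margolus, counts.get(correct, 0)/10000)
print('Worst-case probability with the Margolus-style AND:', worst_margolus)
###Output
_____no_output_____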
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____
###Markdown
Building the Best AND Gate
Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you what `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effects of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically more noisy than any single qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit to be used as the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
    qc.measure(qr[ q_out ],cr[0]) # the output qubit q_out is measured
    # the circuit is run on a simulator, but one that also reproduces the noise and connectivity of Tenerife
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'0': 8996, '1': 1004}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.8994
Probability of correct answer for inputs 0 1
0.9029
Probability of correct answer for inputs 1 0
0.8942
Probability of correct answer for inputs 1 1
0.8973
The lowest of these probabilities was 0.8942
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
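As a first, hedged attempt (a sketch rather than a verified optimum), we can keep the `ccx` but move it onto qubits 2, 4 and 3: these three form a fully connected triangle in the coupling map, and qubit 3 has the smallest readout error listed in the noise model above, which makes it a reasonable choice for the output.
###Code
# Sketch: rerun the reliability check with the AND placed on the 2-4-3 triangle.
# The qubit assignment is an assumption based on the coupling map and the
# readout-error entries in noise_dict, not a guaranteed improvement.
worst_triangle = 1
for input1 in ['0','1']:
    for input2 in ['0','1']:
        counts = AND(input1,input2, q_1=2,q_2=4,q_out=3)
        prob = counts[str(int( input1=='1' and input2=='1' ))]/10000
        worst_triangle = min(worst_triangle,prob)
print('Lowest probability with qubits 2, 4 and 3:',worst_triangle)
###Output
_____no_output_____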
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Building the Best AND Gate
Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effect of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically noisier than any single-qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
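###Markdown
Before moving on, here is a small sketch (our own inspection, not part of the exercise) that pulls the readout-error entries back out of `noise_dict`, to see which qubits are least likely to be misread:
###Code
# Each 'roerror' entry gives the assignment matrix for misreading a qubit;
# probabilities[0][1] is the chance that a prepared 0 is read out as a 1.
for error in noise_dict['errors']:
    if error['type'] == 'roerror':
        qubit = error['gate_qubits'][0][0]
        p_flip = error['probabilities'][0][1]
        print('qubit',qubit,'is misread with probability about',round(p_flip,3))
###Output
_____no_output_____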
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit to be used as the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
    qc.measure(qr[ q_out ],cr[0]) # the output qubit q_out is measured
    # the circuit is run on a simulator, but one that also reproduces the noise and connectivity of Tenerife
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'1': 991, '0': 9009}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9035
Probability of correct answer for inputs 0 1
0.8978
Probability of correct answer for inputs 1 0
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Building the Best AND Gate
Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
%config InlineBackend.figure_format = 'svg' # Makes the images look nice
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effect of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically noisier than any single-qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
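###Markdown
As an optional aside (our own check, not part of the exercise), we can ask the transpiler how many `cx` gates a bare `ccx` costs once it is mapped onto this coupling map. The two placements below, (0, 1, 2) and (0, 3, 4), are illustrative assumptions: the first uses directly connected qubits, the second does not.
###Code
from qiskit import transpile

# Count the two-qubit gates a ccx needs after mapping to the device constraints.
for (a, b, t) in [(0, 1, 2), (0, 3, 4)]:
    qc = QuantumCircuit(qr, cr)
    qc.ccx(qr[a], qr[b], qr[t])
    compiled = transpile(qc, backend, coupling_map=coupling_map,
                         basis_gates=noise_model.basis_gates)
    print('ccx on qubits', (a, b, t), 'compiles to', compiled.count_ops().get('cx', 0), 'cx gates')
###Output
_____no_output_____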
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
    # The keyword q_2 specifies the qubit used to encode input2
    # The keyword q_out specifies the qubit to be used as the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
    qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx
    qc.measure(qr[ q_out ],cr[0]) # the output qubit q_out is measured
    # the circuit is run on a simulator, but one that also reproduces the noise and connectivity of Tenerife
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'1': 980, '0': 9020}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9014
Probability of correct answer for inputs 0 1
0.8994
Probability of correct answer for inputs 1 0
0.8963
Probability of correct answer for inputs 1 1
0.8963
The lowest of these probabilities was 0.8963
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Building the Best AND Gate
Let's import everything:
###Code
from qiskit import *
from qiskit.tools.visualization import plot_histogram
from qiskit.providers.aer import noise
import numpy as np
###Output
_____no_output_____
###Markdown
In Problem Set 1, you made an AND gate with quantum gates. This time you'll do the same again, but for a real device. Using real devices gives you two major constraints to deal with. One is the connectivity, and the other is noise. The connectivity tells you which `cx` gates it is possible to perform directly. For example, the device `ibmq_5_tenerife` has five qubits numbered from 0 to 4. It has a connectivity defined by
###Code
coupling_map = [[1, 0], [2, 0], [2, 1], [3, 2], [3, 4], [4, 2]]
###Output
_____no_output_____
###Markdown
Here the `[1,0]` tells us that we can implement a `cx` with qubit 1 as control and qubit 0 as target, the `[2,0]` tells us we can have qubit 2 as control and 0 as target, and so on. These are the `cx` gates that the device can implement directly. The 'noise' of a device is the collective effect of all the things that shouldn't happen, but nevertheless do happen. Noise results in the output not always having the result we expect. There is noise associated with all processes in a quantum circuit: preparing the initial states, applying gates and measuring the output. For the gates, noise levels can vary between different gates and between different qubits. The `cx` gates are typically noisier than any single-qubit gate. We can also simulate noise using a noise model. And we can set the noise model based on measurements of the noise for a real device. The following noise model is based on `ibmq_5_tenerife`.
###Code
noise_dict = {'errors': [{'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0004721766167523067, 0.0004721766167523067, 0.0004721766167523067, 0.9985834701497431], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0005151090708174488, 0.0005151090708174488, 0.0005151090708174488, 0.9984546727875476], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.000901556048412383, 0.000901556048412383, 0.000901556048412383, 0.9972953318547628], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u2'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0011592423249461303, 0.0011592423249461303, 0.0011592423249461303, 0.9965222730251616], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0009443532335046134, 0.0009443532335046134, 0.0009443532335046134, 0.9971669402994862], 'gate_qubits': [[0]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[1]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0010302181416348977, 0.0010302181416348977, 0.0010302181416348977, 0.9969093455750953], 'gate_qubits': [[2]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.001803112096824766, 0.001803112096824766, 0.001803112096824766, 0.9945906637095256], 'gate_qubits': [[3]]}, {'type': 'qerror', 'operations': ['u3'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0023184846498922607, 0.0023184846498922607, 0.0023184846498922607, 0.9930445460503232], 'gate_qubits': [[4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, 
{'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.002182844139394187, 0.9672573379090872], 'gate_qubits': [[1, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.0020007412998552473, 0.9699888805021712], 'gate_qubits': [[2, 0]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.002485439516158936, 0.9627184072576159], 'gate_qubits': [[2, 1]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 
'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.0037502825428055767, 0.9437457618579164], 'gate_qubits': [[3, 2]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.004401224333760022, 0.9339816349935997], 'gate_qubits': [[3, 4]]}, {'type': 'qerror', 'operations': ['cx'], 'instructions': [[{'name': 'x', 'qubits': [0]}], [{'name': 'y', 'qubits': [0]}], [{'name': 'z', 'qubits': [0]}], [{'name': 'x', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'x', 'qubits': [1]}], [{'name': 'y', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'y', 'qubits': [1]}], [{'name': 'z', 'qubits': [1]}], [{'name': 'x', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'y', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'z', 'qubits': [0]}, {'name': 'z', 'qubits': [1]}], [{'name': 'id', 'qubits': [0]}]], 'probabilities': [0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.0046188825262438934, 0.9307167621063416], 
'gate_qubits': [[4, 2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9372499999999999, 0.06275000000000008], [0.06275000000000008, 0.9372499999999999]], 'gate_qubits': [[0]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9345, 0.0655], [0.0655, 0.9345]], 'gate_qubits': [[1]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.97075, 0.029249999999999998], [0.029249999999999998, 0.97075]], 'gate_qubits': [[2]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.9742500000000001, 0.02574999999999994], [0.02574999999999994, 0.9742500000000001]], 'gate_qubits': [[3]]}, {'type': 'roerror', 'operations': ['measure'], 'probabilities': [[0.8747499999999999, 0.12525000000000008], [0.12525000000000008, 0.8747499999999999]], 'gate_qubits': [[4]]}], 'x90_gates': []}
noise_model = noise.noise_model.NoiseModel.from_dict( noise_dict )
###Output
_____no_output_____
###Markdown
Running directly on the device requires you to have an IBMQ account, and for you to sign in to it within your program. In order to not worry about all this, we'll instead use a simulation of the 5 qubit device defined by the constraints set above.
###Code
qr = QuantumRegister(5, 'qr')
cr = ClassicalRegister(1, 'cr')
backend = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
We now define the `AND` function. This has a few differences to the version in Exercise 1. Firstly, it is defined on a 5 qubit circuit, so you'll need to decide which of the 5 qubits are used to encode `input1`, `input2` and the output. Secondly, the output is a histogram of the number of times that each output is found when the process is repeated over 10000 samples.
###Code
def AND (input1,input2, q_1=0,q_2=1,q_out=2):
# The keyword q_1 specifies the qubit used to encode input1
# The keyword q_2 specifies the qubit used to encode input2
# The keyword q_out specifies the qubit to be used as the output
qc = QuantumCircuit(qr, cr)
# prepare input on qubits q1 and q2
if input1=='1':
qc.x( qr[ q_1 ] )
if input2=='1':
qc.x( qr[ q_2 ] )
qc.ccx(qr[ q_1 ],qr[ q_2 ],qr[ q_out ]) # the AND just needs a ccx (Toffoli) gate
qc.measure(qr[ q_out ],cr[0]) # output from qubit q_out is measured
# the circuit is run on a simulator, but we do it so that the noise and connectivity of Tenerife are also reproduced
job = execute(qc, backend, shots=10000, noise_model=noise_model,
coupling_map=coupling_map,
basis_gates=noise_model.basis_gates)
output = job.result().get_counts()
return output
###Output
_____no_output_____
###Markdown
For example, here are the results when both inputs are `0`.
###Code
result = AND('0','0')
print( result )
plot_histogram( result )
###Output
{'1': 991, '0': 9009}
###Markdown
We'll compare across all results to find the most unreliable.
###Code
worst = 1
for input1 in ['0','1']:
for input2 in ['0','1']:
print('\nProbability of correct answer for inputs',input1,input2)
prob = AND(input1,input2, q_1=0,q_2=1,q_out=2)[str(int( input1=='1' and input2=='1' ))]/10000
print( prob )
worst = min(worst,prob)
print('\nThe lowest of these probabilities was',worst)
###Output
Probability of correct answer for inputs 0 0
0.9035
Probability of correct answer for inputs 0 1
0.8978
Probability of correct answer for inputs 1 0
0.8995
Probability of correct answer for inputs 1 1
0.9046
The lowest of these probabilities was 0.8978
###Markdown
The `AND` function above uses the `ccx` gate to implement the required operation. But you now know how to make your own. Find a way to implement an `AND` for which the lowest of the above probabilities is better than for a simple `ccx`.
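Since the noise model above assigns different error rates to different qubits and qubit pairs, one simple thing to try before redesigning the gate sequence is to keep the `ccx` but scan over which physical qubits hold the inputs and the output. The cell below is only an illustrative sketch reusing the `AND` function defined above (it reruns the sampling many times, so it is slow); the gate-level redesign asked for in the exercise is left open.
###Code
from itertools import permutations
# Scan qubit assignments for the existing ccx-based AND (illustrative sketch)
best_prob, best_layout = 0, None
for q_1, q_2, q_out in permutations(range(5), 3):
    worst = 1
    for input1 in ['0','1']:
        for input2 in ['0','1']:
            correct = str(int( input1=='1' and input2=='1' ))
            counts = AND(input1, input2, q_1=q_1, q_2=q_2, q_out=q_out)
            worst = min(worst, counts.get(correct, 0)/10000)
    if worst > best_prob:
        best_prob, best_layout = worst, (q_1, q_2, q_out)
print('Best worst-case probability', best_prob, 'for (q_1, q_2, q_out) =', best_layout)
###Output
_____no_output_____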
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____ |
Day_022_HW.ipynb | ###Markdown
Homework : (Kaggle) Titanic survival prediction https://www.kaggle.com/c/titanic Homework 1* Observe the example: in the house-price prediction, when switching between label encoding (Label Encoder) and one-hot encoding (One Hot Encoder), which of the two models, linear regression or gradient boosting trees, is affected more? - answer: One Hot Encoder
###Code
# All the preparation before feature engineering (same as the previous example)
import pandas as pd
import numpy as np
import copy, time
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
data_path = 'data/data2/'
df_train = pd.read_csv(data_path + 'titanic_train.csv')
df_test = pd.read_csv(data_path + 'titanic_test.csv')
train_Y = df_train['Survived']
ids = df_test['PassengerId']
df_train = df_train.drop(['PassengerId', 'Survived'] , axis=1)
df_test = df_test.drop(['PassengerId'] , axis=1)
df = pd.concat([df_train,df_test])
df.head()
# Keep only the categorical (object) columns and store them in object_features
object_features = []
for dtype, feature in zip(df.dtypes, df.columns):
if dtype == 'object':
object_features.append(feature)
print(f'{len(object_features)} Object Features : {object_features}\n')
# Keep only the categorical columns
df = df[object_features]
df = df.fillna('None')
train_num = train_Y.shape[0]
df.head()
###Output
5 Object Features : ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']
###Markdown
Homework 2* In the Titanic example, how do label encoding / one-hot encoding each affect the prediction results? (Hint : refer to today's example)
###Code
# Label encoding + logistic regression
"""
Your Code Here
"""
df_temp = pd.DataFrame()
for colname in df.columns:
df_temp[colname] = LabelEncoder().fit_transform(df[colname])
train_X = df_temp[:train_num]
estimator = LogisticRegression(solver='lbfgs')
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# One-hot encoding + logistic regression
"""
Your Code Here
"""
df_temp = pd.get_dummies(data=df)
train_X = df_temp[:train_num]
estimator = LogisticRegression(solver='lbfgs')
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
###Output
_____no_output_____
###Markdown
Homework : (Kaggle) Titanic survival prediction, simplified version https://www.kaggle.com/c/titanic [Homework goal] - Try to answer the questions below by looking only at the code that follows, without relying on the explanations, to get an initial sense of which block is the "feature engineering" part. [Homework focus] - Try to answer the questions below without relying on the comments, using what you have learned so far. Homework 1 Q1: Of the five code blocks A~E below, which one is feature engineering? A1: Block C is the feature engineering. Homework 2 Q2: Comparing the results of blocks B and C, which columns are "categorical columns"? (The English column names are sufficient.) A2: Name, Sex, Ticket, Cabin, Embarked are the categorical columns. Homework 3 Q3: Following on from the previous question, which column is the "target"? A3: Survived is the target.
###Code
# Code block A
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
df_train = pd.read_csv('titanic_train.csv')
df_test = pd.read_csv('titanic_test.csv')
print('df_train shape: ', df_train.shape)
print('df_test shape: ', df_test.shape)
# Code block B
train_Y = df_train['Survived'] # extract train target data
ids = df_test['PassengerId'] # get test id data
df_train = df_train.drop(['PassengerId', 'Survived'] , axis=1) # drop inconsiderable train data
df_test = df_test.drop(['PassengerId'] , axis=1) # drop inconsiderable test data
df = pd.concat([df_train,df_test]) # features match btwn train and test
df.head()
df.dtypes.value_counts() # observe all types of features
df_backup = df.copy()
df = df_backup.copy()
# Code block C
LEncoder = LabelEncoder()
MMEncoder = MinMaxScaler()
for c in df.columns:
df[c] = df[c].fillna(-1) # replace NaN with -1
# if df[c].dtype == 'object':
# df[c] = LEncoder.fit_transform(list(df[c].values)) # label encoding
# df[c] = MMEncoder.fit_transform(df[c].values.reshape(-1, 1)) # match matrix shape
df.head()
for c in df.columns:
if df[c].dtype == 'object':
df[c] = LEncoder.fit_transform(list(df[c].values)) # label encoding
# df[c] = MMEncoder.fit_transform(df[c].values.reshape(-1, 1)) # match matrix shape
df.head()
for c in df.columns:
df[c] = MMEncoder.fit_transform(df[c].values.reshape(-1, 1)) # match matrix shape 1D to 2D
df.head()
# Code block D
train_num = train_Y.shape[0] # get length of target
train_X = df[:train_num] # restore concated df
test_X = df[train_num:] # restore concated df
from sklearn.linear_model import LogisticRegression
estimator = LogisticRegression()
estimator.fit(train_X, train_Y)
pred = estimator.predict(test_X)
# Code block E
sub = pd.DataFrame({'PassengerId': ids, 'Survived': pred})
sub.to_csv('titanic_baseline_by_semisu.csv', index=False)
###Output
_____no_output_____
###Markdown
Homework : (Kaggle) Titanic survival prediction https://www.kaggle.com/c/titanic Homework 1* Observe the example: in the house-price prediction, when switching between label encoding (Label Encoder) and one-hot encoding (One Hot Encoder), which of the two models, linear regression or gradient boosting trees, is affected more?
###Code
# All the preparation before feature engineering (same as the previous example)
import pandas as pd
import numpy as np
import copy, time
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import LabelEncoder
data_path = 'data/'
df_train = pd.read_csv(data_path + 'titanic_train.csv')
df_test = pd.read_csv(data_path + 'titanic_test.csv')
train_Y = df_train['Survived']
ids = df_test['PassengerId']
df_train = df_train.drop(['PassengerId', 'Survived'] , axis=1)
df_test = df_test.drop(['PassengerId'] , axis=1)
df = pd.concat([df_train,df_test])
df.head()
# Keep only the categorical (object) columns and store them in object_features
object_features = []
for dtype, feature in zip(df.dtypes, df.columns):
if dtype == 'object':
object_features.append(feature)
print(f'{len(object_features)} Object Features : {object_features}\n')
# Keep only the categorical columns
df = df[object_features]
df = df.fillna('None')
train_num = train_Y.shape[0]
df.head()
###Output
5 Object Features : ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']
###Markdown
Homework 2* In the Titanic example, how do label encoding / one-hot encoding each affect the prediction results? (Hint : refer to today's example)
###Code
# Label encoding + logistic regression
df_temp = pd.DataFrame()
for c in df.columns:
df_temp[c] = LabelEncoder().fit_transform(df[c])
train_X = df_temp[:train_num]
estimator = LogisticRegression()
start = time.time()
print(f'shape : {train_X.shape}')
print(f'score : {cross_val_score(estimator, train_X, train_Y, cv=5).mean()}')
print(f'time : {time.time() - start} sec')
# One-hot encoding + logistic regression
df_temp = pd.get_dummies(df)
train_X = df_temp[:train_num]
estimator = LogisticRegression()
start = time.time()
print(f'shape : {train_X.shape}')
print(f'score : {cross_val_score(estimator, train_X, train_Y, cv=5).mean()}')
print(f'time : {time.time() - start} sec')
# Label encoding + gradient boosting trees
df_temp = pd.DataFrame()
for c in df.columns:
df_temp[c] = LabelEncoder().fit_transform(df[c])
train_X = df_temp[:train_num]
estimator = GradientBoostingClassifier()
start = time.time()
print(f'shape : {train_X.shape}')
print(f'score : {cross_val_score(estimator, train_X, train_Y, cv=5).mean()}')
print(f'time : {time.time() - start} sec')
# One-hot encoding + gradient boosting trees
df_temp = pd.get_dummies(df)
train_X = df_temp[:train_num]
estimator = GradientBoostingClassifier()
start = time.time()
print(f'shape : {train_X.shape}')
print(f'score : {cross_val_score(estimator, train_X, train_Y, cv=5).mean()}')
print(f'time : {time.time() - start} sec')
###Output
shape : (891, 2429)
score : 0.8013030771473163
time : 12.999475479125977 sec
|
Mar22/Statistics/locationestimates.ipynb | ###Markdown
Sample Location Estimates--------------------------* [Refer Here](https://github.com/khajadatascienceR/DataScienceWithPython/blob/main/Mar22/Statistics/murderrate.csv) for the dataset
###Code
import pandas as pd
state_df = pd.read_csv('murderrate.csv')
state_df
###Output
_____no_output_____
###Markdown
Let's calculate the location estimates
###Code
# mean of the population
state_df['Population'].mean()
# median of the population
state_df['Population'].median()
# trimmed mean of the Population
from scipy.stats import trim_mean
trim_mean(state_df['Population'], 0.15) # drops 15% of the data from each end
import numpy as np
# weighted mean
np.average(state_df['Population'], weights=state_df['Murder Rate'])
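# A related location estimate is the weighted median: the value at which half of
# the total weight falls on either side. A minimal numpy-only sketch, reusing the
# same columns as above; the helper name is just for illustration.
def weighted_median(values, weights):
    order = np.argsort(np.asarray(values))
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum_w = np.cumsum(w)
    return v[np.searchsorted(cum_w, 0.5 * cum_w[-1])]
weighted_median(state_df['Population'], state_df['Murder Rate'])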
###Output
_____no_output_____ |
notebooks/03_04_PCA_SkLearn.ipynb | ###Markdown
Feature engineering - PCAPCA with sklearn on the auto-mpg and Iris datasets *** Environment`conda activate sklearn-env`*** Goals- Run PCA- Observe explained variance- Observe the scatter plot of the PCA features*** Referenceshttps://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html Basic python imports
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.datasets import load_iris
# Make numpy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
###Output
_____no_output_____
###Markdown
Load the dataset from the CSV located on the UCI website: http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data If the URL does not work, the dataset can be loaded from the data folder `./data/auto-mpg.data`.
###Code
label = ''
dataset = None
if True :
label = 'MPG'
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.sample(5)
else :
label = 'target'
data = load_iris(as_frame = True )
dataset = data.frame
dataset.head(2)
###Output
_____no_output_____
###Markdown
Dataset split- row-wise into train and test datasets- column-wise into features and labels
###Code
dataset = dataset.dropna().copy()
dataset.reset_index(drop=True, inplace=True)
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop(label)
train_labels.reset_index(drop=True, inplace=True)
test_labels = test_features.pop(label)
###Output
_____no_output_____
###Markdown
Standardize data
###Code
from sklearn.preprocessing import StandardScaler
scaled_features = StandardScaler().fit_transform(train_features)
###Output
_____no_output_____
###Markdown
PCA
###Code
from sklearn.decomposition import PCA
pca_transformer = PCA()
pca_result = pca_transformer.fit_transform(scaled_features)
labels = {
str(i): f"pca {i+1}"
for i, var in enumerate(pca_transformer.explained_variance_ratio_ * 100)
}
pca_df = pd.DataFrame(data = pca_result, columns = labels)
pca_df = pd.concat([pca_df, train_labels], axis=1)
pca_df.sample(10)
###Output
_____no_output_____
###Markdown
Explain and visualize output
###Code
print('Explained variance ratio:', pca_transformer.explained_variance_ratio_)
corr_orig = dataset.corr()
corr_pca = pca_df.corr()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16,4))
ax1.set_title('PCA Features')
ax2.set_title('Original Features')
sns.color_palette("hls", 8)
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr_pca, dtype=bool))
sns.heatmap(corr_pca, annot=True, fmt='.2f', mask = mask, cmap="YlGnBu", xticklabels=corr_pca.columns.values,yticklabels=corr_pca.columns.values, ax = ax1)
mask = np.triu(np.ones_like(corr_orig, dtype=bool))
sns.heatmap(corr_orig, annot=True, fmt='.2f', mask = mask, cmap="YlGnBu", xticklabels=corr_orig.columns.values,yticklabels=corr_orig.columns.values, ax = ax2)
###Output
_____no_output_____
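###Markdown
A common follow-up (a small sketch, not part of the original flow above) is to plot the cumulative explained variance to help decide how many principal components to keep.
###Code
cumulative = np.cumsum(pca_transformer.explained_variance_ratio_)
plt.plot(range(1, len(cumulative) + 1), cumulative, marker='o')
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.show()
###Output
_____no_output_____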
###Markdown
Plot "new" data
###Code
plt.scatter(pca_df['0'], pca_df['1'], c = pca_df[label])
plt.xlabel('PCA 1')
plt.ylabel('PCA 2')
plt.title(f'{label}')
plt.show()
###Output
_____no_output_____ |
07 - Mapping in matplotlib/0703 - Interlude - Color Maps.ipynb | ###Markdown
Create a Color MapperWe'll need a way of mapping unemployment rates to hot and cold colors, and what we really want is something with a super simple interface, i.e., we just want to call it and pass in a value and get back a color. Unfortunately, we just can't grab a color map from matplotlib and pass in an unemployment rate. The problem with doing this is that a color map expects a float value between 0 and 1 and our data ranges from roughly 2-30. So, every value in our data falls outside of the range of the color map, and so we end up with a map that is just black across all counties. Now, a color map also supports passing in integer values, and it maps those values to one of 256 colors. Given this option, we could simply call `int` on each value before passing it into the color map, but unfortunately, this creates a new problem. Since our data consists of values between 2 and 30, all of our values are on the low end of the spectrum when compared to the color map's range of 0-255. With this method, we end up with an extremely "watered down" map where all of the counties look a nearly identical shade of light yellow.The solution to both of these problems is to normalize each unemployment rate before passing it into our color map. Fortunately, matplotlib provides the `matplotlib.colors.Normalize` class to make this bit extremely easy. But, as I mentioned above, we want a simple interface that allows us to pass in a value and get a color. One way to accomplish this is through a callable object that encapsulates the creation of the normalization function. Again, matplotlib provides what we need to make this really easy. The `pyplot.cm.ScalarMappable` class is a mixin class for adding color map functionality to custom classes. We use this class below to create our `HeatMapper` helper class.The `HeatMapper` takes a single parameter, a list of values, creates a normalization function from the range of those values, and passes it together with a color map to the `ScalarMappable` initializer; calling an instance then uses `to_rgba` to turn a value into a color.
###Code
class HeatMapper(plt.cm.ScalarMappable):
"""A callable that maps cold colors to low values, and hot to high.
"""
def __init__(self, data=None):
norm = mpl.colors.Normalize(vmin=min(data), vmax=max(data))
cmap = plt.cm.hot_r
super(HeatMapper, self).__init__(norm, cmap)
def __call__(self, value):
return self.to_rgba(value)
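# Illustrative usage of the class above (example values, not the real unemployment data):
example_rates = [2.5, 7.1, 12.3, 29.8]
example_mapper = HeatMapper(example_rates)
example_mapper(12.3)  # an RGBA tuple; higher rates map to hotter colors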
###Output
_____no_output_____ |
cleared-demos/linear_least_squares/Image compression.ipynb | ###Markdown
Image CompressionCopyright (C) 2020 Andreas KloecknerMIT LicensePermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included inall copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS INTHE SOFTWARE.
###Code
import numpy as np
import matplotlib.pyplot as pt
from PIL import Image
with Image.open("andreas.jpeg").resize((500,500)) as img:
rgb_img = np.array(img)
rgb_img.shape
img = np.sum(rgb_img, axis=-1)
pt.imshow(img, cmap="gray")
u, sigma, vt = np.linalg.svd(img)
sigma
pt.plot(sigma)
compressed_img = (
sigma[0] * np.outer(u[:, 0], vt[0])
+ sigma[1] * np.outer(u[:, 1], vt[1])
+ sigma[2] * np.outer(u[:, 2], vt[2])
+ sigma[3] * np.outer(u[:, 3], vt[3])
+ sigma[4] * np.outer(u[:, 4], vt[4])
+ sigma[5] * np.outer(u[:, 5], vt[5])
)
pt.imshow(compressed_img, cmap="gray")
###Output
_____no_output_____
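###Markdown
The sum above writes out a rank-6 approximation term by term. The same idea in a more compact form (a small sketch reusing `u`, `sigma`, and `vt` from the cell above) keeps the first k singular triplets via matrix slicing.
###Code
k = 6
compressed_k = u[:, :k] @ np.diag(sigma[:k]) @ vt[:k]
pt.imshow(compressed_k, cmap="gray")
###Output
_____no_output_____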
###Markdown
Image CompressionCopyright (C) 2020 Andreas KloecknerMIT LicensePermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included inall copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS INTHE SOFTWARE.
###Code
import numpy as np
import matplotlib.pyplot as pt
from PIL import Image
with Image.open("andreas.jpeg").resize((500,500)) as img:
rgb_img = np.array(img)
rgb_img.shape
img = np.sum(rgb_img, axis=-1)
pt.imshow(img, cmap="gray")
u, sigma, vt = np.linalg.svd(img)
sigma
pt.plot(sigma)
###Output
_____no_output_____ |
solutions/mid1/submissions/shibruce_172254_6241779_FINM 367 First Mid.ipynb | ###Markdown
Read the packages True or False 1. False: Although the Sharpe ratio is an indicator, you cannot overestimate the covariance between the assets with the largest and smallest Sharpe ratios, so we usually long the largest and short the second. 2. LETFs can generate much higher returns in the long run. 3. I suggest we use alpha to value that; because this is a short-term investment and the overall trend and mean are hard to estimate, it is safer to use the intercept when estimating. 4. In sample and out of sample are both yes: the correlation between HDG and HFRI is more than 93%. It is even better to use a rolling beta in sample, and out of sample also works well with a rolling beta. 5. Alpha is negative: it may be the case that the beta is really high but the non-factor part is relatively small; this does not contradict the fact that the hedge fund is doing great.
###Code
import numpy as np
import pandas as pd
pd.options.display.float_format = "{:,.4f}".format
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
import scipy.stats
###Output
_____no_output_____
###Markdown
Performance Metrics: calculate mean, vol, and Sharpe
###Code
# Need to be given the Sheet of data
def performanceMetrics(returns,annualization=1):
metrics = pd.DataFrame(index=returns.columns)
metrics['Mean'] = returns.mean() * annualization
metrics['Vol'] = returns.std() * np.sqrt(annualization)
metrics['Sharpe'] = (returns.mean() / returns.std()) * np.sqrt(annualization)
metrics['Min'] = returns.min()
metrics['Max'] = returns.max()
return metrics
###Output
_____no_output_____
###Markdown
Tangency Portfolio
###Code
def tangency_weights(returns,dropna=True,scale_cov=1):
if dropna:
returns = returns.dropna()
covmat_full = returns.cov()
covmat_diag = np.diag(np.diag(covmat_full))
covmat = scale_cov * covmat_full + (1-scale_cov) * covmat_diag
weights = np.linalg.solve(covmat,returns.mean())
weights = weights / weights.sum()
# weights1 = pd.Series(weights, index=returns.columns)
return pd.DataFrame(weights, index=returns.columns, columns={'Tangency'})
###Output
_____no_output_____
###Markdown
Display the correlation matrix, showing the largest and smallest pairwise correlations. Requires the data to be loaded.
###Code
# ### Make the diagonals NaN so we can find the highest and lowest pairwise correlations
def display_correlation(df,list_maxmin=True):
corrmat = df.corr()
#ignore self-correlation
corrmat[corrmat==1] = None
sns.heatmap(corrmat)
if list_maxmin:
#Drop 1,1
corr_rank = corrmat.unstack().sort_values().dropna()
#find min
pair_max = corr_rank.index[-1]
pair_min = corr_rank.index[0]
print(f'MIN Correlation pair is {pair_min}')
print(f'MAX Correlation pair is {pair_max}')
###Output
_____no_output_____
###Markdown
Load the data. Change Sheet and file name
###Code
# Load data: required Everywhere
path="proshares_analysis_data.xlsx"
df_ex_ori=pd.read_excel(path, sheet_name='merrill_factors')
df_ex_ori=df_ex_ori.set_index('date')
df_ex_ori.head()
# annual_fac=12
# #df_ex1[['SPY US Equity', 'EEM US Equity', 'EFA US Equity', 'EUO US Equity', 'IWM US Equity']]=\
# # df_ex[['SPY US Equity', 'EEM US Equity', 'EFA US Equity', 'EUO US Equity', 'IWM US Equity']]-df_ex['USGG3M Index']
# df_ex[['SPY US Equity']]=df_ex[['SPY US Equity']]-df_ex[['USGG3M']]
df_ex_ori['SPY US Equity']=df_ex_ori['SPY US Equity'] -df_ex_ori['USGG3M Index']
df_ex_ori['EEM US Equity']=df_ex_ori['EEM US Equity'] -df_ex_ori['USGG3M Index']
df_ex_ori['EFA US Equity']=df_ex_ori['EFA US Equity'] -df_ex_ori['USGG3M Index']
df_ex_ori['EUO US Equity']=df_ex_ori['EUO US Equity'] -df_ex_ori['USGG3M Index']
df_ex_ori['IWM US Equity']=df_ex_ori['IWM US Equity'] -df_ex_ori['USGG3M Index']
df_ex2=df_ex_ori[['SPY US Equity', 'EEM US Equity','EFA US Equity','EUO US Equity','IWM US Equity']]
df_ex=df_ex2.copy()
df_ex
sum_stats=performanceMetrics(df_ex, 12)
# 2a) find the Tangency portfolio
tangency_weights(df_ex)
wts = pd.DataFrame(index=df_ex.columns)
wts['tangency'] = tangency_weights(df_ex)
df_ex_tan = pd.DataFrame(df_ex @ wts['tangency'],columns=['tangency'])
display(performanceMetrics(pd.concat([df_ex,df_ex_tan],axis=1),annualization=12))
display(tangency_weights(df_ex))
# 2.2 Target Returns
target_mean =.02
mu_tan = df_ex.mean() @ wts['tangency'] # Average for each asset* weights of each assets.
delta = target_mean / mu_tan
wts['optimal'] = wts['tangency'] * delta
# list the assets sharpe ratios in a column to demonstrate not highly correlated with optimal weights
comp = pd.concat([wts[['optimal']],sum_stats['Sharpe']],axis=1)
corr_sharpe_wts = comp.corr().values[0][1]
display(comp.sort_values('optimal',ascending=False))
display(comp.corr())
print(f'Total share in risky assets is {delta:.4f}.\nTotal share in risk-free asset is {1-delta:.4f}')
print(f'Correlation between an assets Sharpe ratio and its weight is {corr_sharpe_wts:.4f}.')
print('It is not riskless return')
#2.3 Report Mean, Vol and sharpe of the optimal
compare_with_other_port = performanceMetrics(df_ex @ wts,annualization=12)
display(compare_with_other_port)
print('Mean Vol and Sharpe Shown as above')
# 2.4 Use data through 2018
df_ex21=df_ex.copy()['2019':]
df_ex18= df_ex.loc[:'2018']
tangency_weights(df_ex18)
wts = pd.DataFrame(index=df_ex18.columns)
wts['tangency'] = tangency_weights(df_ex18)
df_ex_tan = pd.DataFrame(df_ex18 @ wts['tangency'],columns=['tangency'])
# display(performanceMetrics(pd.concat([df_ex,df_ex_tan],axis=1),annualization=12))
display(tangency_weights(df_ex18))
target_mean =.02
mu_tan = df_ex.mean() @ wts['tangency'] # Average for each asset* weights of each assets.
delta = target_mean / mu_tan
wts['optimal'] = wts['tangency'] * delta
# list the assets sharpe ratios in a column to demonstrate not highly correlated with optimal weights
comp = pd.concat([wts[['optimal']],sum_stats['Sharpe']],axis=1)
corr_sharpe_wts = comp.corr().values[0][1]
# display(comp.sort_values('optimal',ascending=False))
# display(comp.corr())
display(wts[['optimal']])
print('This is new optimal')
df_ex_21=df_ex.loc['2018':]
df_ex_new = pd.DataFrame(df_ex21 @ wts['optimal'],columns=['new'])
display(performanceMetrics(df_ex_new))
#3.1
rhs = df_ex['SPY US Equity']
lhs = df_ex['EEM US Equity']
reg = sm.OLS(lhs, rhs, missing='drop').fit()
beta = reg.params['SPY US Equity']
print(beta)
print('for every dollar we will invest $0.9257')
a=df_ex['EEM US Equity'] -df_ex['SPY US Equity']*beta
print('the mean is' )
print(a.mean())
print('std is ')
print(a.std())
print('sharpe is')
print(a.mean()/a.std())
print('EEM mean is ')
print(df_ex['EEM US Equity'].mean())
# performanceMetrics(hedge,annualization=1)
# SR_smb_new_m = (smb_new.mean()) / (smb_new.std())
# SR_smb_new_a = (smb_new.mean()*12) / (smb_new.std()*np.sqrt(12))
###Output
the mean is
-0.007792404874129221
std is
0.036318186549459466
sharpe is
-0.21455930525378056
EEM mean is
0.0031486970460246804
###Markdown
It is a hedge, and the numbers are different.
###Code
# 4. Modeling Risk:
path="proshares_analysis_data.xlsx"
df=pd.read_excel(path, sheet_name='merrill_factors')
df=df.set_index('date')
df.head()
rhs = df_ex['SPY US Equity']
lhs = df_ex['EEM US Equity']
reg = sm.OLS(lhs, rhs, missing='drop').fit()
beta = reg.params['SPY US Equity']
r2 = reg.rsquared_adj
# 4.1 a)
df_subset = df
table4 = pd.DataFrame(columns=['h', 'tilde_mu_hat'])
table4['h'] = [5, 10, 15, 20, 25, 30]
table4 = table4.set_index('h')
def p(h, tilde_mu, tilde_sigma):
x = - np.sqrt(h) * tilde_mu / tilde_sigma
val = scipy.stats.norm.cdf(x)
return val
tilde_mu = df['EEM US Equity'].mean()-df['SPY US Equity'].mean()
tilde_sigma = np.sqrt((df['EEM US Equity']. std())**2-(df['SPY US Equity']. std())**2)
table4['tilde_mu_hat'] = p(table4.index, tilde_mu=tilde_mu, tilde_sigma=tilde_sigma)
table4.T.style.set_caption('Solution Table 4: Shortfall probability estimates using 1965-1999 data.')
# The probability that will exceed SPY is increasing
#4.2 Rolling Volatility
x = np.sqrt(8) * (30*0.06/8-22/8*subtable_log_2000_2021.loc['r_M','mean 2000-2021']-subtable_log_1965_2021.loc['r_M','mean 1965-2021'])/sigma
sigma_roll = df['tilde_r'].shift(1).dropna().rolling(60).apply(lambda x: ((x**2).sum()/len(x))**(0.5))
###Output
_____no_output_____ |
_upcoming_posts/2017-07-18.ipynb | ###Markdown
Some simple non-linear regression modelsHere I will explore some basic extensions of the linear regression to perform non-linear regressions.
###Code
from math import *
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataI will use essentially fake data to illustrate the different methods. 1-D relationshipLet us start with the following relationship :$$ y = 2x-\frac{1.5x^3}{e^{x/2}}+5\sin(\frac{\pi x}{2})+3\sin(\pi x) +\varepsilon $$where $\varepsilon \sim \cal N (0,1.5^2)$ is a random noise with standard deviation 1.5.
###Code
nsamples=500
x_train = np.linspace(0,10,nsamples)
y_true = 2*x_train-1.5*x_train**3/np.exp(x_train/2)+5*np.sin(pi*x_train/2)+3*np.sin(pi*x_train)
y_train = y_true+np.random.normal(scale = 1.5,size = len(y_true))
plt.plot(x_train,y_train,'o',x_train,y_true,'r-')
###Output
_____no_output_____
###Markdown
Now we need to create a test set, and a metric to evaluate the performance of the regression model.
###Code
x_test = np.sort(np.random.uniform(0,10,100))
y_test = 2*x_test-1.5*x_test**3/np.exp(x_test/2)+5*np.sin(pi*x_test/2)+3*np.sin(pi*x_test)
def rmse(y_pred,y=y_test):
return sum((y_pred-y)**2)/len(y_pred)
###Output
_____no_output_____
###Markdown
Polynomial regression Let us start with a simple polynomial regression of the form :$$ y = \sum_{i=0}^p \alpha_i x^i $$The maximum degree of the polynomials is a hyperparameter that needs to be chosen according to the performance on the test set.
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
degree = np.arange(4,22,2)
perf = []
pred = np.ndarray((len(x_test),len(degree)))
for i,d in enumerate(degree):
model = make_pipeline(PolynomialFeatures(degree=d),LinearRegression())
#note : the reshape operation is due to a deprecation in sklearn : 1d array are no longer accepted as inputs.
model.fit(x_train.reshape(-1,1),y_train)
pred[:,i] = model.predict(x_test.reshape(-1,1))
perf.append(rmse(pred[:,i]))
plt.plot(degree,perf)
###Output
_____no_output_____
###Markdown
Best performance is obtained with a polynomial of order 14, which seems to be a lot. After $p=18$, the model overfits quickly.
###Code
plt.plot(x_test,pred[:,5],'o',x_test,y_test,'-r')
###Output
_____no_output_____
###Markdown
Not too bad.
###Code
plt.plot(x_test,pred[:,len(degree)-1],'o',x_test,y_test,'-r')
###Output
_____no_output_____
###Markdown
This one is unable to reproduce the behavior at small x. Local regression (LOESS) Boosting Splines Boosted splines
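As a placeholder for the spline section, here is a minimal smoothing-spline sketch (assuming scipy is available); the smoothing factor `s` plays a role similar to the polynomial degree above and the value used here is only a rough first guess.
###Code
from scipy.interpolate import UnivariateSpline
# smoothing spline fit on the training data; s controls the amount of smoothing
spline = UnivariateSpline(x_train, y_train, s=nsamples * 1.5**2)
plt.plot(x_test, spline(x_test), 'o', x_test, y_test, '-r')
print(rmse(spline(x_test)))
###Output
_____no_output_____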
###Code
###Output
_____no_output_____ |
LDA extract topics.ipynb | ###Markdown
In this notebook, we will train a Latent Dirichlet Allocation (LDA) model on tweets to learn sets of words which commonly appear together, hopefully corresponding to topics.We will apply the LDA training on the whole corpus of our tweets and extract 10 topics. Additionally, we will visualize the results using the pyLDAvis library.Following, we will take these results to a different notebook for analysis. There, we will assign a topic distribution to each tweet given the words used in it, and we will sum the topic distributions of all tweets corresponding to a state to arrive at the topic distribution per state.
###Code
from pymongo import MongoClient
import json
client = MongoClient()
db = client.Twitter
import pandas as pd
import time
import re
from nltk.tokenize import RegexpTokenizer
import HTMLParser # In Python 3.4+ import html
import nltk
from nltk.corpus import stopwords
###Output
_____no_output_____
###Markdown
LOAD data from Mongo
###Code
start_time = time.time()
#we are filtering out tweets of different languages and outside of the US
filter_query = {
"$and":[ {"place.country_code":"US"}, { "lang": "en" } ]
}
#we are keeping only our fields of interest
columns_query = {
'text':1,
'entities.hashtags':1,
'entities.user_mentions':1,
'place.full_name':1,
'place.bounding_box':1
}
tweets = pd.DataFrame(list(db.tweets.find(
filter_query,
columns_query
)#.limit()
)
)
elapsed_time = time.time() - start_time
print elapsed_time
###Output
16.0380530357
###Markdown
Preprocessing
###Code
#parse state variable
tweets['state'] = map(lambda place_dict: place_dict['full_name'][-2:] ,tweets['place'])
tweets['state'].value_counts().head()
# #for one state only
# state = 'CA'
# tweets = tweets[tweets['state']==state]
len(tweets)
def Clean(unescaped_tweet):
'''This function takes a tweet as input and returns a list of tokens.'''
tokenizer = RegexpTokenizer(r'\w+')
cleaned_tweet_tokens = tokenizer.tokenize(unescaped_tweet.lower())
return cleaned_tweet_tokens
start_time = time.time() #Starts time
tweets['text'] = tweets['text'].apply(lambda tweet: re.sub(r"http\S+", "", tweet))
#########################################################
def trump_mention(tweet):
trump_count = 0
if ('trump' in tweet.lower()) or ('donald' in tweet.lower()):
return True
return False
tweets['Trump'] = tweets['text'].apply(lambda tweet: trump_mention(tweet))
##############################################################
#tweet mentions --->@
#tweet hashtags --->#
#create two columns with the hashtags and the mentions
tweets['mentions'] = tweets['text'].apply(lambda tweet: re.findall(r'\@\w+',tweet))
tweets['hashtags'] = tweets['text'].apply(lambda tweet: re.findall(r'\#\w+',tweet))
#remove hashtags and mentions
tweets['text'] = tweets['text'].apply(lambda tweet: re.sub(r"\@\w+" , "", tweet))
tweets['text'] = tweets['text'].apply(lambda tweet: re.sub(r"\#\w+" , "", tweet))
#remove the numbers from the text
tweets['text'] =tweets['text'].apply(lambda tweet: ''.join([i for i in tweet if not i.isdigit()]))
trump_count = 0
clinton_count =0
#remove the names and surnames of the two candidates
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Trump" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Clinton" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Donald" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Hillary" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"USA" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"amp" , "", tweet))
#tokenize the text and add an extra column
tweets['token'] = tweets['text'].apply(lambda tweet: Clean(tweet))
tweets['token'] = tweets['token'].apply(lambda x: list(set(x)-set(stopwords.words('english'))))
elapsed_time = time.time() - start_time #time ends
print elapsed_time
tweets.head()
tweets.head()
#test['tags'] = map(lambda tweet: map(lambda tweet: tweet['text'] , tweet['entities']['hashtags']) if tweet['entities']['hashtags'] != None else None, raw_tweet[:100])
#tweets['text'][9]
doc_complete = tweets['token'].tolist()
doc_complete[:2]
import gensim
import pickle
import gensim
from gensim import corpora
# Creating the term dictionary of our corpus, where every unique term is assigned an index
dictionary = corpora.Dictionary(doc_complete)
pickle.dump(dictionary, open( 'dictionary2.pickle', "wb" ) )
# Converting list of documents (corpus) into Document Term Matrix using dictionary prepared above.
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_complete]
pickle.dump(doc_term_matrix, open( 'doc_term_matrix.pickle', "wb" ) )
Lda = gensim.models.ldamulticore.LdaMulticore
nr_topics = 10
nr_passes = 100
start_time = time.time()
# Creating the object for the LDA model using the gensim library
# Running and training the LDA model on the document-term matrix.
ldamodel = Lda(doc_term_matrix, num_topics=nr_topics, id2word = dictionary, passes=nr_passes)
elapsed_time = time.time() - start_time
print 'Topic modelling for', nr_topics,'topics,', nr_passes,'passes,',len(tweets),'tweets:','\ncomplete in',elapsed_time/60.,'minutes'
# Runtimes:
# Florida (~4K) ~ 16 min on 10 topics, 300 passes
# CA (57K) - 48 min on 10 topics 300 passes
# can we do it on the whole data -> take the topics and classify each tweet within them.
# then we have discrete sets with topics and words weights in each topic.
# so then isn't a tweet represented by the appropriate values?
# Print the topics, describing each with its top 50 words.
topics = ldamodel.print_topics(num_topics=nr_topics, num_words=50)
i=0
for topic in topics:
print topic
print ""
i+=1
###Output
_____no_output_____
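###Markdown
As a pointer to the follow-up analysis mentioned in the introduction, here is a minimal sketch (using the `ldamodel` and `dictionary` built above) of how a single preprocessed tweet can be scored against the learned topics.
###Code
# topic distribution of the first preprocessed tweet (illustrative)
bow = dictionary.doc2bow(doc_complete[0])
print ldamodel.get_document_topics(bow)
###Output
_____no_output_____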
###Markdown
save/load the model
###Code
import pickle
nr_topics = 10
nr_passes = 100
state = 'allstates'
name = "trained models/lda/%s_%itopics_%ipasses.pickle"%(state,nr_topics,nr_passes)
print "Procceed to save model in:", name
pickle.dump(ldamodel, open( name, "wb" ) )
#load
ldamodel = pickle.load(open(name,'rb'))
###Output
/home/antonis/anaconda2/envs/USelections/lib/python2.7/site-packages/urllib3/contrib/pyopenssl.py:46: DeprecationWarning: OpenSSL.rand is deprecated - you should use os.urandom instead
import OpenSSL.SSL
/home/antonis/anaconda2/envs/USelections/lib/python2.7/site-packages/scipy/sparse/sparsetools.py:20: DeprecationWarning: `scipy.sparse.sparsetools` is deprecated!
scipy.sparse.sparsetools is a private module for scipy.sparse, and should not be used.
_deprecated()
###Markdown
efforts with pyLDAvis (visualize the LDA topics)
###Code
import time
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
#load the LDA results (model, dictionary and corpus)
start_time = time.time()
ldamodel = pickle.load(open('trained models/lda/allstates_10topics_100passes.pickle'))
dictandcorpus = pickle.load(open('trained models/lda/Dictionary.pickle'))
c = dictandcorpus[1]
d = dictandcorpus[0]
del dictandcorpus
elapsed_time = time.time() - start_time
print elapsed_time
data = pyLDAvis.gensim.prepare(ldamodel, c, d)
data
###Output
_____no_output_____
###Markdown
Interactive version available [here](https://codepen.io/Martin13131/full/zEwxxx/) Also available as an html document in this git repository (LDA_topics.html)
###Code
#save results as an html file
pyLDAvis.save_html(data, open('LDA topics.html','wb'))
###Output
_____no_output_____ |
scripts/d21-en/mxnet/chapter_computer-vision/multiscale-object-detection.ipynb | ###Markdown
Multiscale Object DetectionIn :numref:`sec_anchor`, we generated multiple anchor boxes centered on each pixel of the input image. These anchor boxes are used to sample different regions of the input image. However, if anchor boxes are generated centered on each pixel of the image, soon there will be too many anchor boxes for us to compute. For example, we assume that the input image has a height and a width of 561 and 728 pixels respectively. If five different shapes of anchor boxes are generated centered on each pixel, over two million anchor boxes ($561 \times 728 \times 5$) need to be predicted and labeled on the image.It is not difficult to reduce the number of anchor boxes. An easy way is to apply uniform sampling on a small portion of pixels from the input image and generate anchor boxes centered on the sampled pixels. In addition, we can generate anchor boxes of varied numbers and sizes on multiple scales. Notice that smaller objects are more likely to be positioned on the image than larger ones. Here, we will use a simple example: Objects with shapes of $1 \times 1$, $1 \times 2$, and $2 \times 2$ may have 4, 2, and 1 possible position(s) on an image with the shape $2 \times 2$. Therefore, when using smaller anchor boxes to detect smaller objects, we can sample more regions; when using larger anchor boxes to detect larger objects, we can sample fewer regions.To demonstrate how to generate anchor boxes on multiple scales, let us read an image first. It has a height and width of $561 \times 728$ pixels.
###Code
%matplotlib inline
from mxnet import image, np, npx
from d2l import mxnet as d2l
npx.set_np()
img = image.imread('../img/catdog.jpg')
h, w = img.shape[0:2]
h, w
###Output
_____no_output_____
###Markdown
In :numref:`sec_conv_layer`, the 2D array output of the convolutional neural network (CNN) is calleda feature map. We can determine the midpoints of anchor boxes uniformly sampledon any image by defining the shape of the feature map.The function `display_anchors` is defined below. We are going to generate anchor boxes `anchors` centered on each unit (pixel) on the feature map `fmap`. Since the coordinates of axes $x$ and $y$ in anchor boxes `anchors` have been divided by the width and height of the feature map `fmap`, values between 0 and 1 can be used to represent relative positions of anchor boxes in the feature map. Since the midpoints of anchor boxes `anchors` overlap with all the units on feature map `fmap`, the relative spatial positions of the midpoints of the `anchors` on any image must have a uniform distribution. Specifically, when the width and height of the feature map are set to `fmap_w` and `fmap_h` respectively, the function will conduct uniform sampling for `fmap_h` rows and `fmap_w` columns of pixels and use them as midpoints to generate anchor boxes with size `s` (we assume that the length of list `s` is 1) and different aspect ratios (`ratios`).
###Code
def display_anchors(fmap_w, fmap_h, s):
d2l.set_figsize()
# The values from the first two dimensions will not affect the output
fmap = np.zeros((1, 10, fmap_h, fmap_w))
anchors = npx.multibox_prior(fmap, sizes=s, ratios=[1, 2, 0.5])
bbox_scale = np.array((w, h, w, h))
d2l.show_bboxes(
d2l.plt.imshow(img.asnumpy()).axes, anchors[0] * bbox_scale)
###Output
_____no_output_____
###Markdown
We will first focus on the detection of small objects. In order to make it easier to distinguish upon display, the anchor boxes with different midpoints here do not overlap. We assume that the size of the anchor boxes is 0.15 and the height and width of the feature map are 4. We can see that the midpoints of anchor boxes from the 4 rows and 4 columns on the image are uniformly distributed.
###Code
display_anchors(fmap_w=4, fmap_h=4, s=[0.15])
###Output
_____no_output_____
###Markdown
We are going to reduce the height and width of the feature map by half and use a larger anchor box to detect larger objects. When the size is set to 0.4, overlaps will occur between regions of some anchor boxes.
###Code
display_anchors(fmap_w=2, fmap_h=2, s=[0.4])
###Output
_____no_output_____
###Markdown
Finally, we are going to reduce the height and width of the feature map by half and increase the anchor box size to 0.8. Now the midpoint of the anchor box is the center of the image.
###Code
display_anchors(fmap_w=1, fmap_h=1, s=[0.8])
###Output
_____no_output_____ |
Ro_Davies_assignment_kaggle_challenge_2.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Kaggle Challenge, Module 2 Assignment- [ ] Read [“Adopting a Hypothesis-Driven Workflow”](https://outline.com/5S5tsB), a blog post by a Lambda DS student about the Tanzania Waterpumps challenge.- [ ] Continue to participate in our Kaggle challenge.- [ ] Try Ordinal Encoding.- [ ] Try a Random Forest Classifier.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.- [ ] Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/).- [ ] Get and plot your feature importances.- [ ] Make visualizations and share on Slack. ReadingTop recommendations in _**bold italic:**_ Decision Trees- A Visual Introduction to Machine Learning, [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/), and _**[Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)**_- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) Random Forests- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 8: Tree-Based Methods- [Coloring with Random Forests](http://structuringtheunstructured.blogspot.com/2017/11/coloring-with-random-forests.html)- _**[Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)**_ Categorical encoding for trees- [Are categorical variables getting lost in your random forests?](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)- [Beyond One-Hot: An Exploration of Categorical Variables](http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/)- _**[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)**_- _**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)**_- [Mean (likelihood) encodings: a comprehensive study](https://www.kaggle.com/vprokopev/mean-likelihood-encodings-a-comprehensive-study)- [The Mechanics of Machine Learning, Chapter 6: Categorically Speaking](https://mlbook.explained.ai/catvars.html) Imposter Syndrome- [Effort Shock and Reward Shock (How The Karate Kid Ruined The Modern World)](http://www.tempobook.com/2014/07/09/effort-shock-and-reward-shock/)- [How to manage impostor syndrome in data science](https://towardsdatascience.com/how-to-manage-impostor-syndrome-in-data-science-ad814809f068)- ["I am not a real data 
scientist"](https://brohrer.github.io/imposter_syndrome.html)- _**[Imposter Syndrome in Data Science](https://caitlinhudon.com/2018/01/19/imposter-syndrome-in-data-science/)**_
###Code
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
# Change into directory for module
os.chdir('module2')
import pandas as pd
from sklearn.model_selection import train_test_split
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('../data/tanzania/train_features.csv'),
pd.read_csv('../data/tanzania/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify = train['status_group'], random_state=24)
import numpy as np
def fix(X):
X = X.copy()
X['latitude'] = X['latitude'].replace(-2e-08, 0)
cols_with_zeroes = ['longitude', 'latitude']
for col in cols_with_zeroes:
X[col] = X[col].replace(0, np.nan)
X = X.drop(columns='quantity_group')
return X
train = fix(train)
val = fix(val)
test = fix(test)
target = 'status_group'
train_features = train.drop(columns=[target])
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
cardinality = train_features.select_dtypes(exclude='number').nunique()
categorical_features = cardinality[cardinality <= 50].index.tolist()
features = numeric_features + categorical_features
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.impute import SimpleImputer
%%time
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy = 'median'),
RandomForestClassifier(n_estimators=100, random_state=21, n_jobs = -1)
)
pipeline.fit(X_train, y_train)
print('val accuracy:', pipeline.score(X_val, y_val))
###Output
val accuracy: 0.8062289562289562
CPU times: user 17.8 s, sys: 148 ms, total: 18 s
Wall time: 9.85 s
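###Markdown
One of the stretch goals above is to get and plot the feature importances. A minimal sketch using the fitted pipeline (the step name 'randomforestclassifier' is the default that make_pipeline assigns):
###Code
import matplotlib.pyplot as plt
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
importances.sort_values().plot.barh(figsize=(8, 12))
plt.show()
###Output
_____no_output_____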
|
experiments_adverserial_debiasing.ipynb | ###Markdown
Importing Libraries
###Code
from collections import defaultdict
from operator import itemgetter
from pathlib import Path
import numpy as np
import pandas as pd
from collections import namedtuple
from tabulate import tabulate
import re
import torch
import os
from adversarial_debiasing import AdversarialDebiasing
from load_data import load_data, transform_data, Datapoint
from load_vectors import load_pretrained_vectors, load_vectors
import config
import utility_functions
import qualitative_evaluation
import gensim
import gzip
import pickle
###Output
_____no_output_____
###Markdown
For autoreloading changes made in other python scripts
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Loading the word vectors dictionary
###Code
# For Wikipedia2Vec - pass "Wikipedia2Vec"
# For Glove - pass "Glove"
# For GoogleNews (Word2Vec) - pass "GoogleNews"
word_vectors = load_pretrained_vectors("GoogleNews")
###Output
_____no_output_____
###Markdown
Load the google analogies training dataset
###Code
analogy_dataset = load_data()
analogy_dataset[0:10]
###Output
_____no_output_____
###Markdown
Transformation of the raw data such that it includes the embeddings of the words in consideration
###Code
# Transform the data such that it includes the embeddings of the words in consideration
transformed_analogy_dataset, gender_subspace = transform_data(word_vectors, analogy_dataset, use_boluk = False)
###Output
_____no_output_____
###Markdown
Obtaining the dimensionality of the word embeddings
###Code
word_embedding_dim = transformed_analogy_dataset[0].gt_embedding.shape[0]
print("Dimensions of the word embedding : {}".format(word_embedding_dim))
###Output
_____no_output_____
###Markdown
Testing the transformed analogy dataset
###Code
assert transformed_analogy_dataset[0].analogy_embeddings.shape[0] == word_embedding_dim * 3
assert transformed_analogy_dataset[0].gt_embedding.shape[0] == word_embedding_dim
assert transformed_analogy_dataset[0].protected.shape[0] == 1
print("Dimensions of the network input : {}".format(transformed_analogy_dataset[0].analogy_embeddings.shape))
print("Dimensions of the ground-truth embedding : {}".format(transformed_analogy_dataset[0].gt_embedding.shape))
print("Dimensions of the ground-truth protected variable : {}".format(transformed_analogy_dataset[0].protected.shape))
###Output
_____no_output_____
###Markdown
Grid Search for Hyperparameters
###Code
# To run the grid-search and obtain the np.dot(w.T, g) values
learning_rate_list = [2 ** -12, 2 ** -6, 2 ** -3]
adversary_loss_weight_list = [1.0, 0.5, 0.1]
# For the saved model checkpoints pertaining to the word embedding type
word_embedding_type = 'GNews'
# Performing the grid search
utility_functions.grid_search(learning_rate_list, adversary_loss_weight_list, word_embedding_dim, gender_subspace, transformed_analogy_dataset, word_embedding_type, 'models')
###Output
_____no_output_____
###Markdown
Function definition for loading a model with each hyperparameter configuration
###Code
def load_model(model_path: Path, word_embedding_dim, gender_subspace):
# Obtaining the state dictionary of the respective model
state_dict = torch.load(str(model_path), map_location=torch.device('cpu'))
# Creating an instance of the model
model = AdversarialDebiasing(
seed = 42,
word_embedding_dim = word_embedding_dim,
num_epochs = 500,
debias = False,
gender_subspace = gender_subspace,
batch_size = 256,
adversary_loss_weight = 0.1,
classifier_learning_rate = 2 ** -6,
adversary_learning_rate = 2 ** -6
)
# Setting the respective weights
model.W1 = state_dict["W1"]
model.W2 = state_dict["W2"]
# Returning the model
return model
###Output
_____no_output_____
###Markdown
Accumulating the models pertaining to each hyperparameter configuration and word embedding
###Code
# Accumulator dictionaries for the biased and debiased models
debiased_models = defaultdict(list)
biased_models = defaultdict(list)
# Tuple Definition for each model configuration
ModelResult = namedtuple('ModelResult', ['best_model', 'last_model', 'embedding_type', 'learning_rate', 'adversary_weight', 'debiased'])
# For each saved model
for model_base_path in [Path('models/debiased'), Path('models/non_debiased')]:
l, debiased = (biased_models, False) if 'non_debiased' in str(model_base_path) else (debiased_models, True)
for model_path in model_base_path.iterdir():
if '_last' in str(model_path):
continue
m = re.search('^([A-Za-z]+)_([\d.]+)_([\d.]+)(_last){0,1}.pckl$', str(model_path.name))
embeddings = m.group(1)
learning_rate = m.group(2)
adversary_weight = m.group(3)
best_model = load_model(model_path, word_embedding_dim, gender_subspace)
last_model_path = model_path.parent / f"{model_path.stem}_last{model_path.suffix}"
last_model = load_model(last_model_path, word_embedding_dim, gender_subspace)
l[embeddings].append(ModelResult(best_model, last_model, embeddings, learning_rate, adversary_weight, debiased))
###Output
_____no_output_____
###Markdown
Test printing the accumulator lists
###Code
# print(debiased_models, len(debiased_models))
# print(biased_models, len(biased_models))
# print(debiased_models['GNews'])
###Output
defaultdict(<class 'list'>, {'Glove': [ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FA8448>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FA8788>, embedding_type='Glove', learning_rate='0.000244140625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31E4988>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31E4688>, embedding_type='Glove', learning_rate='0.000244140625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31E4C08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31E4AC8>, embedding_type='Glove', learning_rate='0.000244140625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6485548>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6485948>, embedding_type='Glove', learning_rate='0.015625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31E4C88>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31E49C8>, embedding_type='Glove', learning_rate='0.015625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6485788>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6485448>, embedding_type='Glove', learning_rate='0.015625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E4C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61ED48>, embedding_type='Glove', learning_rate='0.125', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E588>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E1C8>, embedding_type='Glove', learning_rate='0.125', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61ECC8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E608>, embedding_type='Glove', learning_rate='0.125', adversary_weight='1.0', debiased=True)], 'GNews': [ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E9C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31CB108>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD604F08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E8C8>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61EC08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD604AC8>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD604F88>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B0C8>, 
embedding_type='GNews', learning_rate='0.015625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B048>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308BA08>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B3C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B188>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308BD48>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307DC48>, embedding_type='GNews', learning_rate='0.125', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D548>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307DB48>, embedding_type='GNews', learning_rate='0.125', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D348>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D708>, embedding_type='GNews', learning_rate='0.125', adversary_weight='1.0', debiased=True)], 'WikipediaVec': [ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D848>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D0C8>, embedding_type='WikipediaVec', learning_rate='0.000244140625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D608>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043087388>, embedding_type='WikipediaVec', learning_rate='0.000244140625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043087408>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043087948>, embedding_type='WikipediaVec', learning_rate='0.000244140625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31D4CC8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31D4748>, embedding_type='WikipediaVec', learning_rate='0.015625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043087D48>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043087F08>, embedding_type='WikipediaVec', learning_rate='0.015625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043087A08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043087C48>, embedding_type='WikipediaVec', learning_rate='0.015625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043083C08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043083888>, embedding_type='WikipediaVec', learning_rate='0.125', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A0430830C8>, 
last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043083848>, embedding_type='WikipediaVec', learning_rate='0.125', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043083648>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043083588>, embedding_type='WikipediaVec', learning_rate='0.125', adversary_weight='1.0', debiased=True)]}) 3
defaultdict(<class 'list'>, {'GNews': [ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A0430838C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A043083C88>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.1', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAA608>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CAD2DB88>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.5', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD601048>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CAD1D288>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='1.0', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CAD1D088>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD601208>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='0.1', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD601648>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD6016C8>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='0.5', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C30A18C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C30A1C48>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='1.0', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAAF08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAAF48>, embedding_type='GNews', learning_rate='0.125', adversary_weight='0.1', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAAEC8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAAAC8>, embedding_type='GNews', learning_rate='0.125', adversary_weight='0.5', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAF988>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAFB88>, embedding_type='GNews', learning_rate='0.125', adversary_weight='1.0', debiased=False)], 'WikipediaVec': [ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAFC88>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAFE88>, embedding_type='WikipediaVec', learning_rate='0.000244140625', adversary_weight='0.1', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAFE48>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAFD08>, embedding_type='WikipediaVec', learning_rate='0.000244140625', adversary_weight='0.5', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAF808>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAFF88>, embedding_type='WikipediaVec', learning_rate='0.000244140625', adversary_weight='1.0', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAF648>, 
last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FAF508>, embedding_type='WikipediaVec', learning_rate='0.015625', adversary_weight='0.1', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB49C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB4C08>, embedding_type='WikipediaVec', learning_rate='0.015625', adversary_weight='0.5', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB4588>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB4F88>, embedding_type='WikipediaVec', learning_rate='0.015625', adversary_weight='1.0', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB4388>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB4C88>, embedding_type='WikipediaVec', learning_rate='0.125', adversary_weight='0.1', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB48C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB46C8>, embedding_type='WikipediaVec', learning_rate='0.125', adversary_weight='0.5', debiased=False), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FBE648>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FBE988>, embedding_type='WikipediaVec', learning_rate='0.125', adversary_weight='1.0', debiased=False)]}) 2
[ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E9C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1C31CB108>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD604F08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61E8C8>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD61EC08>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD604AC8>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1CD604F88>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B0C8>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B048>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308BA08>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B3C8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308B188>, embedding_type='GNews', learning_rate='0.015625', adversary_weight='1.0', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04308BD48>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307DC48>, embedding_type='GNews', learning_rate='0.125', adversary_weight='0.1', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D548>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307DB48>, embedding_type='GNews', learning_rate='0.125', adversary_weight='0.5', debiased=True), ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D348>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04307D708>, embedding_type='GNews', learning_rate='0.125', adversary_weight='1.0', debiased=True)]
###Markdown
Printing the correlation metric $w^{T}g$ for each hyperparameter configuration (Row = Adversary Loss Weight, Column = Learning Rate)
###Code
# The type of word embedding upon which the models were trained
word_embedding_type = 'GNews' # can also specify 'WikipediaVec' or 'Glove'
# Obtaining all the hyperparameter configurations and sorting them
learning_rates = list(set(model.learning_rate for models in debiased_models.values() for model in models))
adversary_weights = list(set(model.adversary_weight for models in debiased_models.values() for model in models))
adversary_weights = sorted(adversary_weights)
learning_rates = sorted(learning_rates)
# Dataframe to show the matrix of correlations metrics for each hyperparameter configuration
box_df_debiased = pd.DataFrame([], columns=learning_rates, index=adversary_weights)
for model_result in debiased_models[word_embedding_type]:
    box_df_debiased.loc[model_result.adversary_weight, model_result.learning_rate] = np.dot(
        model_result.best_model.W1.clone().detach().numpy().T, gender_subspace.T
    ).item()
box_df_debiased
###Output
_____no_output_____
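###Markdown
For comparison (a sketch along the same lines as the cell above, not part of the original analysis), the same correlation matrix can be built for the models trained without debiasing, using the `biased_models` dictionary printed earlier:
###Code
# Sketch: correlation metric w^T g for the non-debiased models, same hyperparameter grid
box_df_biased = pd.DataFrame([], columns=learning_rates, index=adversary_weights)
for model_result in biased_models[word_embedding_type]:
    box_df_biased.loc[model_result.adversary_weight, model_result.learning_rate] = np.dot(
        model_result.best_model.W1.clone().detach().numpy().T, gender_subspace.T
    ).item()
box_df_biased
###Output
_____no_output_____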
###Markdown
Test Printing
###Code
lr = '0.000244140625'
debiased_model_result = [m for m in debiased_models['GNews'] if m.learning_rate == lr and m.adversary_weight == '0.1'][0]
print(debiased_model_result)
debiased_model = debiased_model_result.best_model
biased_model_result = [m for m in biased_models['GNews'] if m.learning_rate == lr and m.adversary_weight == '0.1'][0]
print(biased_model_result)
biased_model = biased_model_result.best_model
###Output
ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB4308>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D6FB4FC8>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.1', debiased=True)
ModelResult(best_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A04309DBC8>, last_model=<adversarial_debiasing.AdversarialDebiasing object at 0x000001A1D8192B48>, embedding_type='GNews', learning_rate='0.000244140625', adversary_weight='0.1', debiased=False)
###Markdown
Qualitative Evaluation
###Code
# Get sexism traps as word embeddings and words
datapoints, test_analogies = qualitative_evaluation.get_datapoints(word_vectors)
# Predictions of the non debiased model
non_debiased_predictions = biased_model.predict(datapoints)
non_debiased_most_similar_list = utility_functions.obtain_most_similar(non_debiased_predictions, word_vectors)
# Predictions of the debiased model
debiased_predictions = debiased_model.predict(datapoints)
debiased_most_similar_list = utility_functions.obtain_most_similar(debiased_predictions, word_vectors)
# Print similarity results for both models
qualitative_evaluation.print_combined_table(non_debiased_most_similar_list, debiased_most_similar_list, test_analogies)
###Output
he : strong :: she :
+-------------+--------------+-------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-------------+--------------+-------------+--------------|
| robust | 0.492 | robust | 0.504 |
| stong | 0.48 | stong | 0.498 |
| solid | 0.46 | solid | 0.488 |
| stronger | 0.453 | stronger | 0.476 |
| weak | 0.445 | strongest | 0.457 |
| strongest | 0.438 | weak | 0.449 |
| Strong | 0.432 | Strong | 0.447 |
| perky | 0.384 | STRONG | 0.388 |
| buoyant | 0.381 | resilient | 0.385 |
+-------------+--------------+-------------+--------------+
he : boss :: she :
+---------------------------+--------------+---------------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|---------------------------+--------------+---------------------------+--------------|
| bosses | 0.529 | bosses | 0.539 |
| Kym_Marsh | 0.514 | Kym_Marsh | 0.506 |
| manageress | 0.507 | Corrie_babe | 0.505 |
| Corrie_babe | 0.499 | Bev_Callard | 0.485 |
| Coronation_Street_actress | 0.494 | Jane_Danson | 0.478 |
| Jane_Danson | 0.489 | manageress | 0.473 |
| Manageress | 0.483 | Coronation_Street_actress | 0.47 |
| stepmum | 0.48 | Vicky_Entwistle | 0.469 |
| Vicky_Entwistle | 0.479 | Danielle_Lineker | 0.466 |
+---------------------------+--------------+---------------------------+--------------+
he : company :: she :
+------------------------+--------------+-----------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|------------------------+--------------+-----------------+--------------|
| companies | 0.476 | companies | 0.479 |
| Carol_Hively_Walgreens | 0.463 | compay | 0.466 |
| compay | 0.461 | companys | 0.457 |
| Linda_McGillen | 0.455 | Linda_McGillen | 0.454 |
| Jani_Strand | 0.452 | Jafra_Cosmetics | 0.453 |
| com_pany | 0.452 | Dana_Lengkeek | 0.45 |
| ClubJenna | 0.449 | comapny | 0.449 |
| Jafra_Cosmetics | 0.446 | com_pany | 0.445 |
| Park_Seong_ae | 0.445 | Heidi_Magyar | 0.438 |
+------------------------+--------------+-----------------+--------------+
he : athletic :: she :
+--------------------------+--------------+--------------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|--------------------------+--------------+--------------------------+--------------|
| athletics | 0.63 | athletics | 0.624 |
| athletic_director | 0.508 | atheltic | 0.516 |
| atheltic | 0.507 | athletic_director | 0.502 |
| softball | 0.506 | basketball | 0.491 |
| volleyball | 0.497 | intercollegiate_athletic | 0.479 |
| intercollegiate_athletic | 0.495 | softball | 0.477 |
| basketball | 0.492 | Athletic_Director | 0.476 |
| gymnastics | 0.491 | volleyball | 0.469 |
| Athletic_Director | 0.478 | gymnastics | 0.463 |
+--------------------------+--------------+--------------------------+--------------+
he : doctor :: she :
+--------------------+--------------+--------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|--------------------+--------------+--------------------+--------------|
| nurse | 0.656 | gynecologist | 0.626 |
| gynecologist | 0.649 | nurse_practitioner | 0.592 |
| nurse_practitioner | 0.626 | nurse | 0.592 |
| midwife | 0.602 | pediatrician | 0.567 |
| pediatrician | 0.597 | doctors | 0.563 |
| dermatologist | 0.56 | midwife | 0.563 |
| ob_gyn | 0.557 | ob_gyn | 0.557 |
| pharmacist | 0.553 | physician | 0.556 |
| obstetrician | 0.553 | dermatologist | 0.543 |
+--------------------+--------------+--------------------+--------------+
he : leader :: she :
+------------------------+--------------+------------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|------------------------+--------------+------------------------+--------------|
| chairwoman | 0.483 | chairwoman | 0.469 |
| businesswoman | 0.43 | Leader | 0.422 |
| Leader | 0.409 | leadership | 0.417 |
| Chairwoman | 0.398 | chairperson | 0.396 |
| Eldest_daughter | 0.397 | Chairwoman | 0.392 |
| chairperson | 0.395 | leaders | 0.385 |
| leadership | 0.394 | businesswoman | 0.384 |
| leader_Rosa_Otunbayeva | 0.392 | leader_Rosa_Otunbayeva | 0.383 |
| Cushman_Titus | 0.391 | stateswoman | 0.377 |
+------------------------+--------------+------------------------+--------------+
he : director :: she :
+--------------------+--------------+--------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|--------------------+--------------+--------------------+--------------|
| chairwoman | 0.661 | chairwoman | 0.65 |
| coordinator | 0.627 | coordinator | 0.637 |
| Executive_Director | 0.568 | Executive_Director | 0.59 |
| co_ordinator | 0.546 | vice_president | 0.566 |
| chairperson | 0.543 | co_ordinator | 0.556 |
| vice_president | 0.541 | Director | 0.553 |
| Associate_Director | 0.537 | vp | 0.551 |
| Director | 0.535 | Associate_Director | 0.549 |
| direc_tor | 0.53 | chairperson | 0.546 |
+--------------------+--------------+--------------------+--------------+
he : rich :: she :
+---------------------------+--------------+---------------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|---------------------------+--------------+---------------------------+--------------|
| friend_Francie_Vos | 0.51 | friend_Francie_Vos | 0.499 |
| Melamine_nitrogen | 0.469 | richer | 0.488 |
| Grace_Gina_Mantegna | 0.456 | Melamine_nitrogen | 0.463 |
| richer | 0.45 | wealthy | 0.447 |
| wealthy | 0.446 | Autonomy_Virage_visionary | 0.428 |
| Scicasts_Resource_Library | 0.438 | richness | 0.425 |
| Autonomy_Virage_visionary | 0.425 | Scicasts_Resource_Library | 0.425 |
| kissable_lips | 0.414 | kissable_lips | 0.419 |
| heiresses | 0.414 | Grace_Gina_Mantegna | 0.418 |
+---------------------------+--------------+---------------------------+--------------+
he : pilot :: she :
+--------------------+--------------+-------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|--------------------+--------------+-------------------+--------------|
| flight_attendant | 0.479 | pilots | 0.469 |
| Pilot | 0.473 | Pilot | 0.454 |
| pilots | 0.461 | Wash_Alan_Tudyk | 0.451 |
| relaxed_el_Amruni | 0.458 | flight_attendant | 0.447 |
| Wash_Alan_Tudyk | 0.457 | Cathy_Bossi | 0.447 |
| Curt_Piercy | 0.449 | relaxed_el_Amruni | 0.447 |
| Aileen_McGlynn | 0.448 | Aileen_McGlynn | 0.446 |
| airline_stewardess | 0.446 | aborts_landing | 0.418 |
| Cathy_Bossi | 0.444 | piloting | 0.417 |
+--------------------+--------------+-------------------+--------------+
he : captain :: she :
+-----------------+--------------+--------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-----------------+--------------+--------------+--------------|
| captian | 0.525 | captian | 0.555 |
| captains | 0.521 | captains | 0.553 |
| netballer | 0.488 | skipper | 0.493 |
| skipper | 0.486 | Di_Alagich | 0.467 |
| de_Zwager | 0.476 | Hockeyroo | 0.464 |
| Hockeyroo | 0.472 | Lizzy_Igasan | 0.461 |
| Eldest_daughter | 0.463 | captained | 0.46 |
| yachtswoman | 0.463 | Karen_Rolton | 0.453 |
| Netballer | 0.462 | netballer | 0.452 |
+-----------------+--------------+--------------+--------------+
he : president :: she :
+--------------------+--------------+--------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|--------------------+--------------+--------------------+--------------|
| chairwoman | 0.606 | President | 0.602 |
| President | 0.581 | chairwoman | 0.595 |
| chairperson | 0.546 | chairperson | 0.551 |
| vice_president | 0.504 | vice_president | 0.53 |
| Executive_Director | 0.499 | Executive_Director | 0.523 |
| Chairwoman | 0.498 | executive | 0.505 |
| Vice_President | 0.488 | Vice_President | 0.504 |
| Chairperson | 0.485 | Chairwoman | 0.494 |
| executive | 0.481 | copresident | 0.488 |
+--------------------+--------------+--------------------+--------------+
he : power :: she :
+-----------------------+--------------+-----------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-----------------------+--------------+-----------------------+--------------|
| electricity | 0.451 | Power | 0.453 |
| Power | 0.447 | electricity | 0.443 |
| Lorie_Kessler | 0.393 | POWER | 0.392 |
| utility_Zesa_Holdings | 0.389 | Conscientiously_wield | 0.391 |
| NTSB_Airliner_engines | 0.386 | electricy | 0.385 |
| Concentrated_solar | 0.386 | Lorie_Kessler | 0.383 |
| Dorothy_Bracken | 0.386 | Concentrated_solar | 0.383 |
| Guynn_Savage | 0.386 | Guynn_Savage | 0.378 |
| cable_splices | 0.384 | Sostanj_coal_fired | 0.378 |
+-----------------------+--------------+-----------------------+--------------+
he : rational :: she :
+-----------------+--------------+-----------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-----------------+--------------+-----------------+--------------|
| irrational | 0.505 | rationality | 0.512 |
| rationality | 0.504 | irrational | 0.509 |
| rationally | 0.504 | rationally | 0.506 |
| rational_beings | 0.461 | rational_beings | 0.474 |
| Dear_Worried | 0.458 | sensible | 0.467 |
| sensible | 0.45 | sane | 0.461 |
| pathologize | 0.445 | sane_rational | 0.459 |
| sane | 0.441 | Dear_Worried | 0.456 |
| sane_rational | 0.44 | pathologize | 0.451 |
+-----------------+--------------+-----------------+--------------+
he : confident :: she :
+-----------------------+--------------+-----------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-----------------------+--------------+-----------------------+--------------|
| hopeful | 0.511 | hopeful | 0.522 |
| optimistic | 0.503 | optimistic | 0.519 |
| cautiously_optimistic | 0.5 | cautiously_optimistic | 0.515 |
| pleased | 0.462 | pleased | 0.474 |
| thrilled | 0.46 | thrilled | 0.472 |
| convinced | 0.452 | excited | 0.464 |
| excited | 0.449 | convinced | 0.464 |
| guardedly_optimistic | 0.443 | guardedly_optimistic | 0.453 |
| delighted | 0.439 | delighted | 0.45 |
+-----------------------+--------------+-----------------------+--------------+
he : hard :: she :
+-------------+--------------+----------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-------------+--------------+----------------------+--------------|
| harder | 0.497 | harder | 0.53 |
| she | 0.444 | Hard | 0.464 |
| Hard | 0.441 | difficult | 0.463 |
| difficult | 0.43 | tough | 0.459 |
| tough | 0.42 | hardest | 0.416 |
| her | 0.414 | she | 0.394 |
| hardest | 0.398 | SERENA_WILLIAMS_Yeah | 0.389 |
| bangin_bod | 0.392 | skinny_waif | 0.383 |
| Analeigh | 0.391 | HARD | 0.379 |
+-------------+--------------+----------------------+--------------+
he : relaxed :: she :
+-------------------+--------------+-------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-------------------+--------------+-------------------+--------------|
| relaxing | 0.523 | relaxing | 0.519 |
| relax | 0.487 | relax | 0.504 |
| ladylike | 0.463 | comfortable | 0.46 |
| effortlessly_chic | 0.462 | looser | 0.459 |
| demure | 0.459 | ladylike | 0.458 |
| princessy | 0.452 | demure | 0.457 |
| she | 0.448 | Relaxed | 0.451 |
| floral_frock | 0.447 | princessy | 0.449 |
| perky | 0.446 | effortlessly_chic | 0.447 |
+-------------------+--------------+-------------------+--------------+
he : cry :: she :
+---------------------+--------------+---------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|---------------------+--------------+---------------------+--------------|
| crying | 0.544 | crying | 0.557 |
| cries | 0.524 | cries | 0.537 |
| nurse_Lalitha_Gujar | 0.514 | nurse_Lalitha_Gujar | 0.51 |
| sob | 0.483 | scream | 0.492 |
| spoons_powdered | 0.479 | weep | 0.487 |
| bawling | 0.476 | sob | 0.48 |
| cry_hysterically | 0.474 | bawling | 0.479 |
| weep | 0.471 | spoons_powdered | 0.478 |
| pompom_fiasco | 0.462 | cried | 0.466 |
+---------------------+--------------+---------------------+--------------+
he : brave :: she :
+-------------------+--------------+-------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|-------------------+--------------+-------------+--------------|
| courageous | 0.562 | courageous | 0.577 |
| bravely | 0.488 | bravest | 0.491 |
| bravest | 0.476 | bravely | 0.489 |
| sexy_sassy | 0.451 | sexy_sassy | 0.453 |
| brave_souls | 0.446 | valiant | 0.444 |
| Sally_Dynevor | 0.437 | brave_souls | 0.436 |
| Chihuahua_Bruiser | 0.436 | courage | 0.436 |
| heroine | 0.435 | ballsy | 0.431 |
| valiant | 0.433 | gallant | 0.426 |
+-------------------+--------------+-------------+--------------+
he : intelligent :: she :
+------------------------+--------------+------------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|------------------------+--------------+------------------------+--------------|
| WHAT_MAKES_HER_SPECIAL | 0.467 | smart | 0.468 |
| vivacious | 0.457 | socially_adept | 0.455 |
| she'sa | 0.456 | WHAT_MAKES_HER_SPECIAL | 0.451 |
| bossy | 0.45 | bossy | 0.451 |
| wheelchair_TAO | 0.444 | she'sa | 0.446 |
| socially_adept | 0.444 | sexy_sassy | 0.445 |
| perky_blond | 0.442 | wheelchair_TAO | 0.444 |
| sexy_sassy | 0.44 | vivacious | 0.441 |
| everywoman | 0.435 | Rufus_Johnstone | 0.441 |
+------------------------+--------------+------------------------+--------------+
he : ambitious :: she :
+----------------------+--------------+-------------------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|----------------------+--------------+-------------------------+--------------|
| Ambitious | 0.542 | Ambitious | 0.557 |
| black_sequined_bras | 0.533 | black_sequined_bras | 0.538 |
| floundered_Mercado | 0.51 | floundered_Mercado | 0.5 |
| swifter_glossier | 0.499 | swifter_glossier | 0.498 |
| Dianna_Agron_Quinn | 0.492 | overly_ambitious | 0.492 |
| local_teen_determ | 0.482 | Dianna_Agron_Quinn | 0.48 |
| overly_ambitious | 0.472 | local_teen_determ | 0.466 |
| Teen_Sells_Bracelets | 0.454 | Wazzani_Fortress_resort | 0.454 |
| commoner_marrying | 0.452 | Teen_Sells_Bracelets | 0.447 |
+----------------------+--------------+-------------------------+--------------+
man : woman :: boss :
+---------------------------+--------------+-------------+--------------+
| Biased | Biased | Debiased | Debiased |
| Neighbour | Similarity | Neighbour | Similarity |
|---------------------------+--------------+-------------+--------------|
| bosses | 0.551 | bosses | 0.557 |
| manageress | 0.493 | exec | 0.474 |
| exec | 0.458 | supremo | 0.466 |
| Manageress | 0.457 | head_honcho | 0.463 |
| receptionist | 0.451 | manageress | 0.463 |
| Jane_Danson | 0.444 | honcho | 0.437 |
| Fiz_Jennie_McAlpine | 0.441 | Jane_Danson | 0.431 |
| Coronation_Street_actress | 0.44 | Manageress | 0.431 |
| coworker | 0.439 | Sandeesh | 0.43 |
+---------------------------+--------------+-------------+--------------+
examples/imputers/NearestMeanResponseImputer.ipynb | ###Markdown
NearestMeanResponseImputerThis notebook shows the functionality of the NearestMeanResponseImputer class. This transformer takes the mean of the response column for each value present in the column to be imputed, compares these means to the mean of the response column for the null entries, and finally fills nulls with the value for which the two means are closest.
###Code
import pandas as pd
import numpy as np
from sklearn.datasets import fetch_california_housing
import tubular
from tubular.imputers import NearestMeanResponseImputer
tubular.__version__
###Output
_____no_output_____
###Markdown
MotivationThis transformer is designed to fill null values with the value for which they will have the least impact on the mean of the response; below is a simple motivating example.Using column "b" as the response, we first produce a plot of the mean response for each non-null value.
###Code
df=pd.DataFrame(
{
"a": [1, 1, 1, 2, 2, 3, np.nan],
"b": [6, 2, 1, 5, 7, 5, 6]
}
)
ax=df[df.notnull()].groupby('a').mean().plot(xticks=[1,2,3], ylim=(3,6),legend=False)
ax.set_ylabel('mean_b')
###Output
_____no_output_____
###Markdown
Next we fill the null values with their nearest neighbour; notice in this plot that we have shifted the mean for the leftmost group.
###Code
df=pd.DataFrame(
{
"a": [1, 1, 1, 2, 2, 3, 1],
"b": [6, 2, 1, 5, 7, 5, 6]
}
)
ax=df[df.notnull()].groupby('a').mean().plot(xticks=[1,2,3], ylim=(3,6), legend=False)
ax.set_ylabel("mean_b")
###Output
_____no_output_____
###Markdown
Finally we fill the null values as NearestMeanResponseImputer would; notice that in this plot the group means agree with those in the initial plot.
###Code
df=pd.DataFrame(
{
"a": [1, 1, 1, 2, 2, 3, 2],
"b": [6, 2, 1, 5, 7, 5, 6]
}
)
ax = df[df.notnull()].groupby('a').mean().plot(xticks=[1,2,3], ylim=(3,6), legend=False)
ax.set_ylabel("mean_b")
###Output
_____no_output_____
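###Markdown
The same choice can be read off numerically (a small sketch, not part of the original notebook; the variable names are just for illustration): the mean of "b" over the null rows is 6, and among the observed groups the mean of "b" is closest to 6 for a = 2, which is why 2 is the value imputed in the plot above.
###Code
# Sketch: reproduce the nearest-mean choice for the toy data
toy = pd.DataFrame({"a": [1, 1, 1, 2, 2, 3, np.nan], "b": [6, 2, 1, 5, 7, 5, 6]})
null_mean = toy.loc[toy["a"].isnull(), "b"].mean()  # mean response of the null rows (6.0)
group_means = toy.groupby("a")["b"].mean()          # mean response per observed value of "a"
(group_means - null_mean).abs().idxmin()            # value whose mean is closest -> 2.0
###Output
_____no_output_____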
###Markdown
Load California housing dataset from sklearn
###Code
cali = fetch_california_housing()
cali_df = pd.DataFrame(cali['data'], columns=cali['feature_names'])
cali_df['target'] = cali['target']
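# Assigning a random sub-sample back (index-aligned) leaves NaN in the rows that were dropped,
# deliberately introducing missing values for the imputer to fill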
cali_df['HouseAge'] = cali_df['HouseAge'].sample(frac=0.95, random_state=2)
cali_df['Population'] = cali_df['Population'].sample(frac=0.995, random_state=3)
cali_df.head()
cali_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Simple Usage Initialising NearestMeanResponseImputer
###Code
imp1 = NearestMeanResponseImputer(
response_column='target',
columns=['HouseAge', 'Population'],
copy=True,
verbose=True
)
###Output
BaseTransformer.__init__() called
###Markdown
NearestMeanResponseImputer FitThe fit method for NearestMeanResponseImputer must be run before the transform method. It computes the values with which each relevant column's null entries will be imputed, and these are stored as an attribute called impute_values_. This attribute is a dictionary with keys matching the relevant column names.
###Code
imp1.fit(cali_df)
imp1.impute_values_
###Output
_____no_output_____
###Markdown
NearestMeanResponseImputer TransformThe transform method for NearestMeanResponseImputer takes a pandas dataframe as input and will fill null values in the relevant columns with the impute values learned in the fit step.
###Code
cali_df2=imp1.transform(cali_df)
cali_df2.isnull().sum()
###Output
_____no_output_____
###Markdown
NearestMeanResponseImputerThis notebook shows the functionality of the NearestMeanResponseImputer class. This transformer takes the mean of the response column for each value present in the column to be imputed, compares these means to the mean of the response column for the null entries, and finally fills nulls with the value for which the two means are closest.
###Code
import pandas as pd
import numpy as np
import tubular
from tubular.imputers import NearestMeanResponseImputer
tubular.__version__
###Output
_____no_output_____
###Markdown
MotivationThis transformer is designed to fill null values with the value for which they will have the least impact on the mean of the response; below is a simple motivating example.Using column "b" as the response, we first produce a plot of the mean response for each non-null value.
###Code
df=pd.DataFrame(
{
"a": [1, 1, 1, 2, 2, 3, np.nan],
"b": [6, 2, 1, 5, 7, 5, 6]
}
)
ax=df[df.notnull()].groupby('a').mean().plot(xticks=[1,2,3], ylim=(3,6),legend=False)
ax.set_ylabel('mean_b')
###Output
_____no_output_____
###Markdown
Next we fill the null values with their nearest neighbour; notice in this plot that we have shifted the mean for the leftmost group.
###Code
df=pd.DataFrame(
{
"a": [1, 1, 1, 2, 2, 3, 1],
"b": [6, 2, 1, 5, 7, 5, 6]
}
)
ax=df[df.notnull()].groupby('a').mean().plot(xticks=[1,2,3], ylim=(3,6), legend=False)
ax.set_ylabel("mean_b")
###Output
_____no_output_____
###Markdown
Finally we fill the null values as NearestMeanResponseImputer would; notice that in this plot the group means agree with those in the initial plot.
###Code
df=pd.DataFrame(
{
"a": [1, 1, 1, 2, 2, 3, 2],
"b": [6, 2, 1, 5, 7, 5, 6]
}
)
ax = df[df.notnull()].groupby('a').mean().plot(xticks=[1,2,3], ylim=(3,6), legend=False)
ax.set_ylabel("mean_b")
###Output
_____no_output_____
###Markdown
Load Boston house price dataset from sklearnNote: the load_boston script modifies the original Boston dataset to include null values and pandas categorical dtypes.
###Code
boston_df = tubular.testing.test_data.prepare_boston_df()
boston_df.shape
boston_df.head()
boston_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Simple Usage Initialising NearestMeanResponseImputer
###Code
imp1 = NearestMeanResponseImputer(
response_column='target',
columns=['CRIM', 'ZN'],
copy=True,
verbose=True
)
###Output
BaseTransformer.__init__() called
###Markdown
NearestMeanResponseImputer FitThe fit method for NearestMeanResponseImputer must be run before the transform method. It computes the values with which each relevant column's null entries will be imputed, and these are stored as an attribute called impute_values_. This attribute is a dictionary with keys matching the relevant column names.
###Code
imp1.fit(boston_df)
imp1.impute_values_
###Output
_____no_output_____
###Markdown
NearestMeanResponseImputer TransformThe transform method for NearestMeanResponseImputer takes a pandas dataframe as input and will fill null values in the relevant columns with the impute values learned in the fit step.
###Code
boston_df2=imp1.transform(boston_df)
boston_df2.isnull().sum()
###Output
_____no_output_____
###Markdown
Alternate UsageWe can also use this transformer in the event that we want to fill null values in our test set with impute values learned from our training set. In particular, if our training set contains no null values and our test set does, we can specify use_median_if_no_nulls as True in the fit stage so that our imputer will learn the median values of our training columns.
###Code
df_train=boston_df[boston_df['CRIM'].notnull()]
df_test=boston_df[boston_df['CRIM'].isnull()]
df_test.head()
df_train['CRIM'].median()
###Output
_____no_output_____
###Markdown
Initialising NearestMeanResponseImputer
###Code
imp_2=NearestMeanResponseImputer(
response_column='target',
columns='CRIM',
use_median_if_no_nulls=True,
copy=True,
verbose=True
)
###Output
BaseTransformer.__init__() called
###Markdown
NearestMeanResponseImputer Fit
###Code
imp_2.fit(df_train)
imp_2.impute_values_
###Output
_____no_output_____
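###Markdown
Since df_train contains no nulls in CRIM, the learned impute value should match the training median computed earlier (a quick sanity check, assuming impute_values_ is keyed by column name as described above):
###Code
imp_2.impute_values_['CRIM'] == df_train['CRIM'].median()
###Output
_____no_output_____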
###Markdown
NearestMeanResponseImputer Transform
###Code
df_test2 = imp_2.transform(df_test)
df_test2.head()
###Output
BaseTransformer.transform() called
Visualizing-COVID-19/visualizing-covid-19.ipynb | ###Markdown
1. From epidemic to pandemicIn December 2019, COVID-19 coronavirus was first identified in the Wuhan region of China. By March 11, 2020, the World Health Organization (WHO) categorized the COVID-19 outbreak as a pandemic. A lot has happened in the months in between with major outbreaks in Iran, South Korea, and Italy. We know that COVID-19 spreads through respiratory droplets, such as through coughing, sneezing, or speaking. But, how quickly did the virus spread across the globe? And, can we see any effect from country-wide policies, like shutdowns and quarantines? Fortunately, organizations around the world have been collecting data so that governments can monitor and learn from this pandemic. Notably, the Johns Hopkins University Center for Systems Science and Engineering created a publicly available data repository to consolidate this data from sources like the WHO, the Centers for Disease Control and Prevention (CDC), and the Ministry of Health from multiple countries.In this notebook, you will visualize COVID-19 data from the first several weeks of the outbreak to see at what point this virus became a global pandemic.Please note that information and data regarding COVID-19 is frequently being updated. The data used in this project was pulled on March 17, 2020, and should not be considered to be the most up to date data available.
###Code
# plots options
options(repr.plot.width = 8, repr.plot.height = 4, repr.plot.res = 300)
# Load the readr, ggplot2, and dplyr packages
library(readr)
library(ggplot2)
library(dplyr)
# Read datasets/confirmed_cases_worldwide.csv into confirmed_cases_worldwide
confirmed_cases_worldwide <- read_csv("datasets/confirmed_cases_worldwide.csv")
# See the result
head(confirmed_cases_worldwide)
###Output
-- Column specification ------------------------------------------------------------------------------------------------
cols(
  date = col_date(format = ""),
  cum_cases = col_double()
)
###Markdown
2. Confirmed cases throughout the worldThe table above shows the cumulative confirmed cases of COVID-19 worldwide by date. Just reading numbers in a table makes it hard to get a sense of the scale and growth of the outbreak. Let's draw a line plot to visualize the confirmed cases worldwide.
###Code
# Draw a line plot of cumulative cases vs. date
# Label the y-axis
ggplot(confirmed_cases_worldwide, aes(x = date, y = cum_cases)) +
geom_line() +
ylab("Cumulative confirmed cases") +
theme_bw()
###Output
_____no_output_____
###Markdown
3. China compared to the rest of the worldThe y-axis in that plot is pretty scary, with the total number of confirmed cases around the world approaching 200,000. Beyond that, some weird things are happening: there is an odd jump in mid February, then the rate of new cases slows down for a while, then speeds up again in March. We need to dig deeper to see what is happening.Early on in the outbreak, the COVID-19 cases were primarily centered in China. Let's plot confirmed COVID-19 cases in China and the rest of the world separately to see if it gives us any insight.We'll build on this plot in future tasks. One thing that will be important for the following tasks is that you add aesthetics within the line geometry of your ggplot, rather than making them global aesthetics.
###Code
# Read in datasets/confirmed_cases_china_vs_world.csv
confirmed_cases_china_vs_world <- read_csv("datasets/confirmed_cases_china_vs_world.csv")
# See the result
head(confirmed_cases_china_vs_world, 10)
# Draw a line plot of cumulative cases vs. date, colored by is_china
# Define aesthetics within the line geom
plt_cum_confirmed_cases_china_vs_world <- ggplot(confirmed_cases_china_vs_world) +
geom_line(aes(x = date, y = cum_cases, color = is_china)) +
ylab("Cumulative confirmed cases") +
theme_bw()
# See the plot
plt_cum_confirmed_cases_china_vs_world
###Output
_____no_output_____
###Markdown
4. Let's annotate!Wow! The two lines have very different shapes. In February, the majority of cases were in China. That changed in March when it really became a global outbreak: around March 14, the total number of cases outside China overtook the cases inside China. This was days after the WHO declared a pandemic.There were a couple of other landmark events that happened during the outbreak. For example, the huge jump in the China line on February 13, 2020 wasn't just a bad day regarding the outbreak; China changed the way it reported figures on that day (CT scans were accepted as evidence for COVID-19, rather than only lab tests).By annotating events like this, we can better interpret changes in the plot.
###Code
who_events <- tribble(
~ date, ~ event,
"2020-01-30", "Global health\nemergency declared",
"2020-03-11", "Pandemic\ndeclared",
"2020-02-13", "China reporting\nchange"
) %>%
mutate(date = as.Date(date))
# Using who_events, add vertical dashed lines with an xintercept at date
# and text at date, labeled by event, and at 100000 on the y-axis
plt_cum_confirmed_cases_china_vs_world +
geom_vline(who_events, mapping = aes(xintercept = date), linetype = "dashed") +
geom_text(who_events, mapping = aes(x = date , y = 100000, label = event))
###Output
_____no_output_____
###Markdown
5. Adding a trend line to ChinaWhen trying to assess how big future problems are going to be, we need a measure of how fast the number of cases is growing. A good starting point is to see if the cases are growing faster or slower than linearly.There is a clear surge of cases around February 13, 2020, with the reporting change in China. However, a couple of days after, the growth of cases in China slows down. How can we describe COVID-19's growth in China after February 15, 2020?
###Code
# Filter for China, from Feb 15
china_after_feb15 <- confirmed_cases_china_vs_world %>%
filter(is_china == "China", date >= "2020-02-15")
head(china_after_feb15)
# Using china_after_feb15, draw a line plot cum_cases vs. date
# Add a smooth trend line using linear regression, no error bars
ggplot(china_after_feb15, aes(x = date, y = cum_cases)) +
geom_line() +
geom_smooth(method = "lm", se = FALSE) +
ylab("Cumulative confirmed cases") +
theme_bw()
###Output
`geom_smooth()` using formula 'y ~ x'
###Markdown
6. And the rest of the world?From the plot above, the growth rate in China is slower than linear. That's great news because it indicates China has at least somewhat contained the virus in late February and early March.How does the rest of the world compare to linear growth?
###Code
# Filter confirmed_cases_china_vs_world for not China
not_china <- confirmed_cases_china_vs_world %>%
filter(is_china == "Not China")
head(not_china)
# Using not_china, draw a line plot cum_cases vs. date
# Add a smooth trend line using linear regression, no error bars
plt_not_china_trend_lin <- ggplot(not_china, aes(x = date, y = cum_cases)) +
geom_line() +
geom_smooth(method = "lm", se = FALSE) +
ylab("Cumulative confirmed cases") +
theme_bw()
# See the result
plt_not_china_trend_lin
###Output
`geom_smooth()` using formula 'y ~ x'
###Markdown
7. Adding a logarithmic scaleFrom the plot above, we can see a straight line does not fit well at all, and the rest of the world is growing much faster than linearly. What if we added a logarithmic scale to the y-axis?
###Code
# Modify the plot to use a logarithmic scale on the y-axis
plt_not_china_trend_lin +
scale_y_log10()
###Output
`geom_smooth()` using formula 'y ~ x'
###Markdown
8. Which countries outside of China have been hit hardest?With the logarithmic scale, we get a much closer fit to the data. From a data science point of view, a good fit is great news. Unfortunately, from a public health point of view, that means that cases of COVID-19 in the rest of the world are growing at an exponential rate, which is terrible news.Not all countries are being affected by COVID-19 equally, and it would be helpful to know where in the world the problems are greatest. Let's find the countries outside of China with the most confirmed cases in our dataset.
###Code
# Run this to get the data for each country
confirmed_cases_by_country <- read_csv("datasets/confirmed_cases_by_country.csv")
glimpse(confirmed_cases_by_country)
# Group by country, summarize to calculate total cases, find the top 7
top_countries_by_total_cases <- confirmed_cases_by_country %>%
group_by(country) %>%
summarise(total_cases = max(cum_cases)) %>%
top_n(7, total_cases)
# See the result
top_countries_by_total_cases
###Output
_____no_output_____
###Markdown
9. Plotting hardest hit countries as of Mid-March 2020Even though the outbreak was first identified in China, there is only one country from East Asia (South Korea) in the above table. Four of the listed countries (France, Germany, Italy, and Spain) are in Europe and share borders. To get more context, we can plot these countries' confirmed cases over time.Finally, congratulations on getting to the last step! If you would like to continue making visualizations or find the hardest hit countries as of today, you can do your own analyses with the latest data available here.
###Code
# Read in the dataset from datasets/confirmed_cases_top7_outside_china.csv
confirmed_cases_top7_outside_china <- read_csv("datasets/confirmed_cases_top7_outside_china.csv")
# Glimpse at the contents of confirmed_cases_top7_outside_china
glimpse(confirmed_cases_top7_outside_china)
# Using confirmed_cases_top7_outside_china, draw a line plot of
# cum_cases vs. date, colored by country
ggplot(confirmed_cases_top7_outside_china, aes(x = date, y = cum_cases, color = country)) +
geom_line() +
ylab("Cumulative confirmed cases") +
theme_bw()
###Output
_____no_output_____
bcnet_network_setup.ipynb | ###Markdown
Bitcoin Transaction Network Characterization and Basic Analysis
###Code
import blocksci
import pandas as pd
import numpy as np
import networkx as nx
%matplotlib notebook
# Point to parsed blockchain data
chain = blocksci.Blockchain("/home/ubuntu/bitcoin")
###Output
_____no_output_____
###Markdown
Network Characterization Clustering
###Code
ClustMan=blocksci.cluster.ClusterManager("/home/ubuntu/bitcoin/clusters/",chain)
clusters=ClustMan.clusters()
cluster_ix=clusters.index
# Extract blocks
blocks=chain.range(start='2009-01-01 00:00:00',end='2011-12-31 23:59:59')
# Extract transactions from blocks
txs=blocks.txes
# Extract output addresses from the blocks in the range
addresses=blocks.outputs.address
init_addresses=set([])
for address in addresses:
init_addresses.add(address)
print(len(init_addresses))
# Create set of clusters associated with addresses
init_clusters=set([])
add_clust_dic={}
for address in init_addresses:
cluster_i=ClustMan.cluster_with_address(address)
init_clusters.add(cluster_i)
add_clust_dic[address.address_num]=cluster_i # Different addresses might have the same internal address number
print(len(init_clusters))
print(len(add_clust_dic))
# Create Dictionary {address_num:{tx where add is input}}
add_txin={}
for tx in txs:
for address_num in tx.inputs.address.address_num:
try:
add_txin[address_num].add(tx.index)
except KeyError:
add_txin[address_num]=set([])
add_txin[address_num].add(tx.index)
except AttributeError:
add_txin[address_num]=set([])
add_txin[address_num].add(tx.index)
print(list(add_txin.keys())[:10])
print(add_txin[242])
# Create Dictionary {address_num:{tx where add is output}}
add_txout={}
for tx in txs:
for address_num in tx.outputs.address.address_num:
try:
add_txout[address_num].add(tx.index)
except KeyError:
add_txout[address_num]=set([])
add_txout[address_num].add(tx.index)
except AttributeError:
add_txout[address_num]=set([])
add_txout[address_num].add(tx.index)
print(list(add_txout.keys())[:10])
print(add_txout[2023333])
# Create graph edges: connect each spending cluster to the clusters of the output addresses.
# The edge pairs are collected in cluster_edges (an assumption about the intended use,
# since the original draft computed edge_i but never stored it).
%time
cluster_edges = set()
for cluster in clusters:
    for address_num in cluster.addresses.address_num:
        try:
            for tx in add_txin[address_num]:
                for address_no in chain.tx_with_index(tx).outputs.address.address_num:
                    edge_i = (cluster.index, add_clust_dic[address_no].index)
                    cluster_edges.add(edge_i)
        except KeyError:
            continue
for cluster in init_clusters:
address_clust_i=cluster.outs.address
for address in address_clust_i:
cluster
###Output
_____no_output_____
###Markdown
Define nodes and edges
###Code
# Define graph object and add nodes
bc_graph=nx.Graph()
bc_graph.add_nodes_from(init_clusters)
print(bc_graph.number_of_nodes())
###Output
874968
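###Markdown
One way to finish the "edges" half of this section (a sketch that assumes the cluster_edges set accumulated in the edge-construction cell above reflects the intended edge list) is to build an index-keyed graph, since the edges are stored as pairs of cluster indices rather than Cluster objects.
###Code
# Sketch: build a graph keyed by cluster index from the collected edge pairs
bc_graph_ix = nx.Graph()
bc_graph_ix.add_edges_from(cluster_edges)
print(bc_graph_ix.number_of_nodes(), bc_graph_ix.number_of_edges())
###Output
_____no_output_____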
dev_course/dl2/06_cuda_cnn_hooks_init.ipynb | ###Markdown
ConvNet
###Code
x_train,y_train,x_valid,y_valid = get_data()
###Output
_____no_output_____
###Markdown
Helper function to quickly normalize with the mean and standard deviation from our training set:
###Code
#export
def normalize_to(train, valid):
m,s = train.mean(),train.std()
return normalize(train, m, s), normalize(valid, m, s)
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
###Output
_____no_output_____
###Markdown
Let's check it behaved properly.
###Code
x_train.mean(),x_train.std()
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
###Output
_____no_output_____
###Markdown
To refactor layers, it's useful to have a `Lambda` layer that can take a basic function and convert it to a layer you can put in `nn.Sequential`.NB: if you use a Lambda layer with a lambda function, your model won't pickle so you won't be able to save it with PyTorch. So it's best to give a name to the function you're using inside your Lambda (like flatten below).
###Code
#export
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x): return self.func(x)
def flatten(x): return x.view(x.shape[0], -1)
###Output
_____no_output_____
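###Markdown
A quick shape check (a sketch, assuming torch is already in the namespace from the earlier exported notebooks): `Lambda(flatten)` turns a batch of 28 by 28 images back into flat vectors.
###Code
Lambda(flatten)(torch.zeros(2, 1, 28, 28)).shape
###Output
_____no_output_____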
###Markdown
This one takes the flat vector of size `bs x 784` and puts it back as a batch of images of 28 by 28 pixels:
###Code
def mnist_resize(x): return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
###Markdown
We can now define a simple CNN.
###Code
def get_cnn_model(data):
return nn.Sequential(
Lambda(mnist_resize),
nn.Conv2d( 1, 8, 5, padding=2,stride=2), nn.ReLU(), #14
nn.Conv2d( 8,16, 3, padding=1,stride=2), nn.ReLU(), # 7
nn.Conv2d(16,32, 3, padding=1,stride=2), nn.ReLU(), # 4
nn.Conv2d(32,32, 3, padding=1,stride=2), nn.ReLU(), # 2
nn.AdaptiveAvgPool2d(1),
Lambda(flatten),
nn.Linear(32,data.c)
)
model = get_cnn_model(data)
###Output
_____no_output_____
###Markdown
Basic callbacks from the previous notebook:
###Code
cbfs = [Recorder, partial(AvgStatsCallback,accuracy)]
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(1, learn)
###Output
train: [1.7832209375, tensor(0.3780)]
valid: [0.68908681640625, tensor(0.7742)]
CPU times: user 7.84 s, sys: 5.79 s, total: 13.6 s
Wall time: 5.87 s
###Markdown
CUDA This took a long time to run, so it's time to use a GPU. A simple Callback can make sure the model, inputs and targets are all on the same device.
###Code
# Somewhat more flexible way
device = torch.device('cuda',0)
class CudaCallback(Callback):
def __init__(self,device): self.device=device
def begin_fit(self): self.model.to(device)
def begin_batch(self): self.run.xb,self.run.yb = self.xb.to(device),self.yb.to(device)
# Somewhat less flexible, but quite convenient
torch.cuda.set_device(device)
#export
class CudaCallback(Callback):
def begin_fit(self): self.model.cuda()
def begin_batch(self): self.run.xb,self.run.yb = self.xb.cuda(),self.yb.cuda()
cbfs.append(CudaCallback)
model = get_cnn_model(data)
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(3, learn)
###Output
train: [1.8033628125, tensor(0.3678, device='cuda:0')]
valid: [0.502658544921875, tensor(0.8599, device='cuda:0')]
train: [0.3883639453125, tensor(0.8856, device='cuda:0')]
valid: [0.205377734375, tensor(0.9413, device='cuda:0')]
train: [0.17645265625, tensor(0.9477, device='cuda:0')]
valid: [0.15847452392578126, tensor(0.9543, device='cuda:0')]
CPU times: user 4.36 s, sys: 1.07 s, total: 5.43 s
Wall time: 5.41 s
###Markdown
Now, that's definitely faster! Refactor model First we can regroup all the conv/relu in a single function:
###Code
def conv2d(ni, nf, ks=3, stride=2):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), nn.ReLU())
###Output
_____no_output_____
###Markdown
Another thing is that we can do the mnist resize in a batch transform, which we can implement with a Callback.
###Code
#export
class BatchTransformXCallback(Callback):
_order=2
def __init__(self, tfm): self.tfm = tfm
def begin_batch(self): self.run.xb = self.tfm(self.xb)
def view_tfm(*size):
def _inner(x): return x.view(*((-1,)+size))
return _inner
mnist_view = view_tfm(1,28,28)
cbfs.append(partial(BatchTransformXCallback, mnist_view))
###Output
_____no_output_____
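###Markdown
For instance (a quick sketch), `mnist_view` maps a flat batch of shape `[bs, 784]` to `[bs, 1, 28, 28]`:
###Code
mnist_view(torch.zeros(4, 784)).shape
###Output
_____no_output_____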
###Markdown
With the `AdaptiveAvgPool`, this model can now work on any size input:
###Code
nfs = [8,16,32,32]
def get_cnn_layers(data, nfs):
nfs = [1] + nfs
return [
conv2d(nfs[i], nfs[i+1], 5 if i==0 else 3)
for i in range(len(nfs)-1)
] + [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs): return nn.Sequential(*get_cnn_layers(data, nfs))
###Output
_____no_output_____
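###Markdown
For example (a quick sketch), thanks to the adaptive pooling the same architecture accepts a larger input without any change:
###Code
get_cnn_model(data, nfs)(torch.zeros(2, 1, 56, 56)).shape
###Output
_____no_output_____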
###Markdown
And this helper function will quickly give us everything needed to run the training.
###Code
#export
def get_runner(model, data, lr=0.6, cbs=None, opt_func=None, loss_func = F.cross_entropy):
if opt_func is None: opt_func = optim.SGD
opt = opt_func(model.parameters(), lr=lr)
learn = Learner(model, opt, loss_func, data)
return learn, Runner(cb_funcs=listify(cbs))
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.4, cbs=cbfs)
model
run.fit(3, learn)
###Output
train: [1.90592640625, tensor(0.3403, device='cuda:0')]
valid: [0.743217529296875, tensor(0.7483, device='cuda:0')]
train: [0.4440590625, tensor(0.8594, device='cuda:0')]
valid: [0.203494482421875, tensor(0.9409, device='cuda:0')]
train: [0.1977476953125, tensor(0.9397, device='cuda:0')]
valid: [0.13920831298828126, tensor(0.9606, device='cuda:0')]
###Markdown
Hooks Manual insertion Let's say we want to do some telemetry, and want the mean and standard deviation of the activations of each layer in the model. First we can do it manually like this:
###Code
class SequentialModel(nn.Module):
def __init__(self, *layers):
super().__init__()
self.layers = nn.ModuleList(layers)
self.act_means = [[] for _ in layers]
self.act_stds = [[] for _ in layers]
def __call__(self, x):
for i,l in enumerate(self.layers):
x = l(x)
self.act_means[i].append(x.data.mean())
self.act_stds [i].append(x.data.std ())
return x
def __iter__(self): return iter(self.layers)
model = SequentialModel(*get_cnn_layers(data, nfs))
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
run.fit(2, learn)
###Output
train: [2.11050140625, tensor(0.2425, device='cuda:0')]
valid: [1.2921490234375, tensor(0.5014, device='cuda:0')]
train: [0.6482396875, tensor(0.7932, device='cuda:0')]
valid: [0.18447919921875, tensor(0.9439, device='cuda:0')]
###Markdown
Now we can have a look at the means and stds of the activations at the beginning of training.
###Code
for l in model.act_means: plt.plot(l)
plt.legend(range(6));
for l in model.act_stds: plt.plot(l)
plt.legend(range(6));
for l in model.act_means: plt.plot(l[:10])
plt.legend(range(6));
for l in model.act_stds: plt.plot(l[:10])
plt.legend(range(6));
###Output
_____no_output_____
###Markdown
Pytorch hooks Hooks are `PyTorch` objects you can add to any `nn.Module`. They will be called when the module they are registered on is executed during the forward pass (forward hook) or the backward pass (backward hook). Hooks don't require us to rewrite the model.
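As a minimal sketch of the raw API (on a throwaway layer, purely for illustration): `register_forward_hook` returns a handle that we keep around so we can detach the hook later.
###Code
# Sketch: register a forward hook on a single layer, then remove it via the returned handle
lin = nn.Linear(4, 2)
handle = lin.register_forward_hook(lambda mod, inp, outp: print(outp.shape))
_ = lin(torch.randn(3, 4))   # prints torch.Size([3, 2])
handle.remove()              # detach the hook once we're done with it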
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
act_means = [[] for _ in model]
act_stds = [[] for _ in model]
###Output
_____no_output_____
###Markdown
A hook is attached to a layer, and needs a function that takes three arguments (module, input, output). Here we store the mean and std of the output in the correct position of our list.
###Code
def append_stats(i, mod, inp, outp):
act_means[i].append(outp.data.mean())
act_stds [i].append(outp.data.std())
for i,m in enumerate(model): m.register_forward_hook(partial(append_stats, i))
run.fit(1, learn)
for o in act_means: plt.plot(o)
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Hook class We can refactor this into a Hook class. It's very important to remove the hooks when they are deleted, otherwise references will be kept and the memory won't be properly released when your model is deleted.
###Code
#export
def children(m): return list(m.children())
class Hook():
def __init__(self, m, f): self.hook = m.register_forward_hook(partial(f, self))
def remove(self): self.hook.remove()
def __del__(self): self.remove()
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[])
means,stds = hook.stats
means.append(outp.data.mean())
stds .append(outp.data.std())
###Output
_____no_output_____
###Markdown
NB: In fastai we use a `bool` param to choose whether to make it a forward or backward hook. In the above version we're only supporting forward hooks.
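A minimal sketch of what such a version could look like (the class name and `is_forward` parameter here are illustrative, not fastai's actual API):
###Code
# Sketch: a Hook variant that can also register a backward hook
class HookFB():
    def __init__(self, m, f, is_forward=True):
        # backward hooks receive (module, grad_input, grad_output) instead of (module, input, output);
        # newer PyTorch versions prefer register_full_backward_hook
        register = m.register_forward_hook if is_forward else m.register_backward_hook
        self.hook = register(partial(f, self))
    def remove(self): self.hook.remove()
    def __del__(self): self.remove()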
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
hooks = [Hook(l, append_stats) for l in children(model[:4])]
run.fit(1, learn)
for h in hooks:
plt.plot(h.stats[0])
h.remove()
plt.legend(range(4));
###Output
_____no_output_____
###Markdown
A Hooks class Let's design our own class that can contain a list of objects. It behaves a bit like a numpy array in the sense that we can index it via:
- a single index
- a slice (like 1:5)
- a list of indices
- a mask of indices (`[True,False,False,True,...]`)
The `__iter__` method is there to be able to do things like `for x in ...`.
###Code
#export
class ListContainer():
def __init__(self, items): self.items = listify(items)
def __getitem__(self, idx):
if isinstance(idx, (int,slice)): return self.items[idx]
if isinstance(idx[0],bool):
assert len(idx)==len(self) # bool mask
return [o for m,o in zip(idx,self.items) if m]
return [self.items[i] for i in idx]
def __len__(self): return len(self.items)
def __iter__(self): return iter(self.items)
def __setitem__(self, i, o): self.items[i] = o
def __delitem__(self, i): del(self.items[i])
def __repr__(self):
res = f'{self.__class__.__name__} ({len(self)} items)\n{self.items[:10]}'
if len(self)>10: res = res[:-1]+ '...]'
return res
ListContainer(range(10))
ListContainer(range(100))
t = ListContainer(range(10))
t[[1,2]], t[[False]*8 + [True,False]]
###Output
_____no_output_____
###Markdown
We can use it to write a `Hooks` class that contains several hooks. We will also use it in the next notebook as a container for our objects in the data block API.
###Code
#export
from torch.nn import init
class Hooks(ListContainer):
def __init__(self, ms, f): super().__init__([Hook(m, f) for m in ms])
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
def __del__(self): self.remove()
def __delitem__(self, i):
self[i].remove()
super().__delitem__(i)
def remove(self):
for h in self: h.remove()
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
hooks = Hooks(model, append_stats)
hooks
hooks.remove()
x,y = next(iter(data.train_dl))
x = mnist_resize(x).cuda()
x.mean(),x.std()
p = model[0](x)
p.mean(),p.std()
for l in model:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight)
l[0].bias.data.zero_()
p = model[0](x)
p.mean(),p.std()
###Output
_____no_output_____
###Markdown
Having given an `__enter__` and `__exit__` method to our `Hooks` class, we can use it as a context manager. This makes sure that once we are out of the `with` block, all the hooks have been removed and aren't there to pollute our memory.
###Code
with Hooks(model, append_stats) as hooks:
run.fit(2, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
plt.legend(range(6));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(6));
###Output
train: [1.31235171875, tensor(0.5528, device='cuda:0')]
valid: [0.2173892578125, tensor(0.9362, device='cuda:0')]
train: [0.192031640625, tensor(0.9398, device='cuda:0')]
valid: [0.1460028076171875, tensor(0.9572, device='cuda:0')]
###Markdown
Other statistics Let's store more than the means and stds and plot histograms of our activations now.
###Code
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[],[])
means,stds,hists = hook.stats
means.append(outp.data.mean().cpu())
stds .append(outp.data.std().cpu())
hists.append(outp.data.cpu().histc(40,0,10)) #histc isn't implemented on the GPU
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
for l in model:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight)
l[0].bias.data.zero_()
with Hooks(model, append_stats) as hooks: run.fit(1, learn)
# Thanks to @ste for initial version of histogram plotting code
def get_hist(h): return torch.stack(h.stats[2]).t().float().log1p()  # log1p just compresses the range for display
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.imshow(get_hist(h), origin='lower')
ax.axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
From the histograms, we can easily compute more statistics; here, the fraction of activations that fall in the lowest bins (i.e. that are close to zero):
###Code
def get_min(h):
h1 = torch.stack(h.stats[2]).t().float()
    return h1[:2].sum(0)/h1.sum(0)  # fraction of activations in the two lowest bins (histc range 0-10, so roughly 0-0.5)
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.plot(get_min(h))
ax.set_ylim(0,1)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Generalized ReLU Now let's use our model with a generalized ReLU that can be shifted and capped at a maximum value.
###Code
#export
def get_cnn_layers(data, nfs, layer, **kwargs):
nfs = [1] + nfs
return [layer(nfs[i], nfs[i+1], 5 if i==0 else 3, **kwargs)
for i in range(len(nfs)-1)] + [
nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def conv_layer(ni, nf, ks=3, stride=2, **kwargs):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), GeneralRelu(**kwargs))
class GeneralRelu(nn.Module):
def __init__(self, leak=None, sub=None, maxv=None):
super().__init__()
self.leak,self.sub,self.maxv = leak,sub,maxv
def forward(self, x):
x = F.leaky_relu(x,self.leak) if self.leak is not None else F.relu(x)
if self.sub is not None: x.sub_(self.sub)
if self.maxv is not None: x.clamp_max_(self.maxv)
return x
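# A quick sanity check of GeneralRelu (sketch): leaky slope 0.1 below zero,
# shift everything down by 0.4, then clamp at a maximum of 6.
GeneralRelu(leak=0.1, sub=0.4, maxv=6.)(torch.tensor([-2., 0., 1., 100.]))  # tensor([-0.6000, -0.4000,  0.6000,  6.0000])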
def init_cnn(m, uniform=False):
f = init.kaiming_uniform_ if uniform else init.kaiming_normal_
for l in m:
if isinstance(l, nn.Sequential):
            f(l[0].weight, a=0.1)  # a=0.1 is the negative-slope parameter, matching the leak=0.1 used in GeneralRelu
l[0].bias.data.zero_()
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[],[])
means,stds,hists = hook.stats
means.append(outp.data.mean().cpu())
stds .append(outp.data.std().cpu())
hists.append(outp.data.cpu().histc(40,-7,7))
model = get_cnn_model(data, nfs, conv_layer, leak=0.1, sub=0.4, maxv=6.)
init_cnn(model)
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
with Hooks(model, append_stats) as hooks:
run.fit(1, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss,hi = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss,hi = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(5));
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.imshow(get_hist(h), origin='lower')
ax.axis('off')
plt.tight_layout()
def get_min(h):
h1 = torch.stack(h.stats[2]).t().float()
    return h1[19:22].sum(0)/h1.sum(0)  # fraction in the three bins straddling zero (histc range is now -7 to 7)
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.plot(get_min(h))
ax.set_ylim(0,1)
plt.tight_layout()
#export
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model, uniform=uniform)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
sched = combine_scheds([0.5, 0.5], [sched_cos(0.2, 1.), sched_cos(1., 0.1)])  # cosine warm-up from 0.2 to 1. over the first half, then cosine annealing down to 0.1
learn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.177220859375, tensor(0.6270, device='cuda:0')]
valid: [0.331805712890625, tensor(0.8985, device='cuda:0')]
train: [0.3674151171875, tensor(0.8885, device='cuda:0')]
valid: [0.394902099609375, tensor(0.8691, device='cuda:0')]
train: [0.29181142578125, tensor(0.9135, device='cuda:0')]
valid: [0.12695498046875, tensor(0.9642, device='cuda:0')]
train: [0.11358849609375, tensor(0.9647, device='cuda:0')]
valid: [0.1171941650390625, tensor(0.9657, device='cuda:0')]
train: [0.0813043896484375, tensor(0.9754, device='cuda:0')]
valid: [0.102300390625, tensor(0.9715, device='cuda:0')]
train: [0.057199677734375, tensor(0.9825, device='cuda:0')]
valid: [0.07670272216796875, tensor(0.9786, device='cuda:0')]
train: [0.04207271484375, tensor(0.9870, device='cuda:0')]
valid: [0.06070926513671875, tensor(0.9811, device='cuda:0')]
train: [0.03412069091796875, tensor(0.9899, device='cuda:0')]
valid: [0.06048909301757813, tensor(0.9826, device='cuda:0')]
###Markdown
Uniform init may provide more useful initial weights (a normal distribution puts a lot of them close to 0).
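A rough way to see this (a sketch; the 0.01 threshold is arbitrary): draw the same-shaped weight tensor with both inits and compare how many values start very close to zero.
###Code
# Sketch: fraction of weights with |w| < 0.01 under kaiming normal vs kaiming uniform init
w_n, w_u = torch.empty(32, 32, 3, 3), torch.empty(32, 32, 3, 3)
init.kaiming_normal_ (w_n, a=0.1)
init.kaiming_uniform_(w_u, a=0.1)
(w_n.abs() < 0.01).float().mean(), (w_u.abs() < 0.01).float().mean()  # the normal init puts more weights near zero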
###Code
learn,run = get_learn_run(nfs, data, 1., conv_layer, uniform=True,
cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.13958578125, tensor(0.6487, device='cuda:0')]
valid: [0.3293475341796875, tensor(0.8952, device='cuda:0')]
train: [0.3618896484375, tensor(0.8904, device='cuda:0')]
valid: [0.19215552978515624, tensor(0.9407, device='cuda:0')]
train: [0.20206876953125, tensor(0.9378, device='cuda:0')]
valid: [0.12095736083984375, tensor(0.9660, device='cuda:0')]
train: [0.123935849609375, tensor(0.9618, device='cuda:0')]
valid: [0.14329190673828124, tensor(0.9567, device='cuda:0')]
train: [0.10821904296875, tensor(0.9675, device='cuda:0')]
valid: [0.07789203491210937, tensor(0.9778, device='cuda:0')]
train: [0.0598996728515625, tensor(0.9809, device='cuda:0')]
valid: [0.07529915771484375, tensor(0.9769, device='cuda:0')]
train: [0.0429351416015625, tensor(0.9866, device='cuda:0')]
valid: [0.06512515869140625, tensor(0.9809, device='cuda:0')]
train: [0.0341603076171875, tensor(0.9898, device='cuda:0')]
valid: [0.06295247802734374, tensor(0.9822, device='cuda:0')]
###Markdown
Export Here's a handy way to export our module without needing to update the file name - after we define this, we can just use `nb_auto_export()` in the future (h/t Stas Bekman):
###Code
#export
from IPython.display import display, Javascript
def nb_auto_export():
display(Javascript("""{
const ip = IPython.notebook
if (ip) {
ip.save_notebook()
console.log('a')
const s = `!python notebook2script.py ${ip.notebook_name}`
if (ip.kernel) { ip.kernel.execute(s) }
}
}"""))
nb_auto_export()
###Output
_____no_output_____
###Markdown
ConvNet
###Code
x_train,y_train,x_valid,y_valid = get_data()
###Output
_____no_output_____
###Markdown
Helper function to quickly normalize with the mean and standard deviation from our training set:
###Code
#export
def normalize_to(train, valid):
m,s = train.mean(),train.std()
return normalize(train, m, s), normalize(valid, m, s)
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
x_train.mean(),x_train.std()
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
###Output
_____no_output_____
###Markdown
To refactor layers, it's useful to have a `Lambda` layer that can take a basic function and convert it to a layer you can put in `nn.Sequential`. NB: if you use a Lambda layer with a lambda function, your model won't pickle so you won't be able to save it with PyTorch. So it's best to give a name to the function you're using inside your Lambda (like flatten below).
###Code
#export
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x): return self.func(x)
def flatten(x): return x.view(x.shape[0], -1)
def mnist_resize(x): return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
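###Markdown
A quick illustration of the pickling point above (a sketch; the lambda line is left commented out because it raises):
###Code
import pickle
len(pickle.dumps(Lambda(flatten)))              # works: `flatten` is a named function, pickled by reference
# pickle.dumps(Lambda(lambda x: x.view(-1)))    # fails: lambdas can't be pickled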
###Markdown
We can now define a simple CNN.
###Code
def get_cnn_model(data):
return nn.Sequential(
Lambda(mnist_resize),
nn.Conv2d( 1, 8, 5, padding=2,stride=2), nn.ReLU(), #14
nn.Conv2d( 8,16, 3, padding=1,stride=2), nn.ReLU(), # 7
nn.Conv2d(16,32, 3, padding=1,stride=2), nn.ReLU(), # 4
nn.Conv2d(32,32, 3, padding=1,stride=2), nn.ReLU(), # 2
nn.AdaptiveAvgPool2d(1),
Lambda(flatten),
nn.Linear(32,data.c)
)
model = get_cnn_model(data)
cbfs = [Recorder, partial(AvgStatsCallback,accuracy)]
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(1, learn)
###Output
train: [2.0051634375, tensor(0.2985)]
valid: [0.732300439453125, tensor(0.7834)]
CPU times: user 7.31 s, sys: 4.7 s, total: 12 s
Wall time: 4.03 s
###Markdown
CUDA This took a long time to run, so it's time to use a GPU. A simple Callback can make sure the model, inputs and targets are all on the same device.
###Code
# Somewhat more flexible way
device = torch.device('cuda',0)
class CudaCallback(Callback):
def __init__(self,device): self.device=device
def begin_fit(self): self.model.to(device)
def begin_batch(self): self.run.xb,self.run.yb = self.xb.to(device),self.yb.to(device)
# Somewhat less flexible, but quite convenient
torch.cuda.set_device(device)
#export
class CudaCallback(Callback):
def begin_fit(self): self.model.cuda()
def begin_batch(self): self.run.xb,self.run.yb = self.xb.cuda(),self.yb.cuda()
cbfs.append(CudaCallback)
model = get_cnn_model(data)
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(3, learn)
###Output
train: [0.1647791796875, tensor(0.9497, device='cuda:0')]
valid: [0.1328588134765625, tensor(0.9607, device='cuda:0')]
train: [0.137473251953125, tensor(0.9590, device='cuda:0')]
valid: [0.110581884765625, tensor(0.9676, device='cuda:0')]
train: [0.1281933203125, tensor(0.9616, device='cuda:0')]
valid: [0.09979959106445313, tensor(0.9719, device='cuda:0')]
CPU times: user 2.77 s, sys: 411 ms, total: 3.18 s
Wall time: 3.18 s
###Markdown
Refactor model First we can regroup all the conv/relu in a single function:
###Code
def conv2d(ni, nf, ks=3, stride=2):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), nn.ReLU())
###Output
_____no_output_____
###Markdown
Another thing is that we can add the channel dimension in a batch transform, which we can do with a Callback.
###Code
#export
class BatchTransformXCallback(Callback):
_order=2
def __init__(self, tfm): self.tfm = tfm
def begin_batch(self): self.run.xb = self.tfm(self.xb)
def view_tfm(*size):
def _inner(x): return x.view(*((-1,)+size))
return _inner
mnist_view = view_tfm(1,28,28)
cbfs.append(partial(BatchTransformXCallback, mnist_view))
###Output
_____no_output_____
###Markdown
With the `AdaptiveAvgPool`, this model can now work on any size input:
###Code
nfs = [8,16,32,32]
def get_cnn_layers(data, nfs):
nfs = [1] + nfs
return [
conv2d(nfs[i], nfs[i+1], 5 if i==0 else 3)
for i in range(len(nfs)-1)
] + [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs): return nn.Sequential(*get_cnn_layers(data, nfs))
#export
def get_runner(model, data, lr=0.6, cbs=None, opt_func=None, loss_func = F.cross_entropy):
if opt_func is None: opt_func = optim.SGD
opt = opt_func(model.parameters(), lr=lr)
learn = Learner(model, opt, loss_func, data)
return learn, Runner(cb_funcs=listify(cbs))
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.4, cbs=cbfs)
run.fit(3, learn)
###Output
train: [1.7564903125, tensor(0.3862, device='cuda:0')]
valid: [0.773130908203125, tensor(0.7468, device='cuda:0')]
train: [0.33649609375, tensor(0.8952, device='cuda:0')]
valid: [0.2017922607421875, tensor(0.9387, device='cuda:0')]
train: [0.16189546875, tensor(0.9507, device='cuda:0')]
valid: [0.15152830810546875, tensor(0.9550, device='cuda:0')]
###Markdown
Hooks Hooks are `PyTorch` objects you can add to any `nn.Module`. They will be called when the module they are registered on is executed during the forward pass (forward hook) or the backward pass (backward hook). NB: Hooks won't work as is if you're using multi-GPU training. Manual insertion Let's say we want to do some telemetry, and want the mean and standard deviation of the activations in each layer of the model. First we can do it manually like this:
###Code
class SequentialModel(nn.Module):
def __init__(self, *layers):
super().__init__()
self.layers = nn.ModuleList(layers)
self.act_means = [[] for _ in layers]
self.act_stds = [[] for _ in layers]
def __call__(self, x):
for i,l in enumerate(self.layers):
x = l(x)
self.act_means[i].append(x.data.mean())
self.act_stds [i].append(x.data.std ())
return x
def __iter__(self): return iter(self.layers)
model = SequentialModel(*get_cnn_layers(data, nfs))
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
run.fit(2, learn)
for l in model.act_means: plt.plot(l)
plt.legend(range(6));
for l in model.act_stds: plt.plot(l)
plt.legend(range(6));
for l in model.act_means: plt.plot(l[:10])
plt.legend(range(6));
for l in model.act_stds: plt.plot(l[:10])
plt.legend(range(6));
###Output
_____no_output_____
###Markdown
Pytorch hooks But hooks don't require us to rewrite the model (if we're using a pretrained model for instance).
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
act_means = [[] for _ in model]
act_stds = [[] for _ in model]
###Output
_____no_output_____
###Markdown
A hook is attached to a layer, and needs a function that takes three arguments (module, input, output). Here we store the mean and std of the output in the correct position of our list.
###Code
def append_stats(i, mod, inp, outp):
act_means[i].append(outp.data.mean())
act_stds [i].append(outp.data.std())
for i,m in enumerate(model): m.register_forward_hook(partial(append_stats, i))
run.fit(1, learn)
for o in act_means: plt.plot(o)
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Hook class We can refactor this into a Hook class. It's very important to remove the hooks when they are deleted, otherwise references will be kept and the memory won't be properly released when your model is deleted.
###Code
#export
def children(m): return list(m.children())
class Hook():
def __init__(self, m, f): self.hook = m.register_forward_hook(partial(f, self))
def remove(self): self.hook.remove()
def __del__(self): self.remove()
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[])
means,stds = hook.stats
means.append(outp.data.mean())
stds .append(outp.data.std())
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
hooks = [Hook(l, append_stats) for l in children(model[:4])]
run.fit(1, learn)
for h in hooks:
plt.plot(h.stats[0])
h.remove()
plt.legend(range(4));
###Output
_____no_output_____
###Markdown
A Hooks class Let's design our own class that can contain a list of objects. It behaves a bit like a numpy array in the sense that we can index it via:
- a single index
- a slice (like 1:5)
- a list of indices
- a mask of indices (`[True,False,False,True,...]`)
The `__iter__` method is there to be able to do things like `for x in ...`.
###Code
#export
class ListContainer():
def __init__(self, items): self.items = listify(items)
def __getitem__(self, idx):
if isinstance(idx, (int,slice)): return self.items[idx]
if isinstance(idx[0],bool):
assert len(idx)==len(self) # bool mask
return [o for m,o in zip(idx,self.items) if m]
return [self.items[i] for i in idx]
def __len__(self): return len(self.items)
def __iter__(self): return iter(self.items)
def __setitem__(self, i, o): self.items[i] = o
def __delitem__(self, i): del(self.items[i])
def __repr__(self):
res = f'{self.__class__.__name__} ({len(self)} items)\n{self.items[:10]}'
if len(self)>10: res = res[:-1]+ '...]'
return res
ListContainer(range(10))
ListContainer(range(100))
t = ListContainer(range(10))
t[[1,2]], t[[False]*8 + [True,False]]
###Output
_____no_output_____
###Markdown
We can use it to write a `Hooks` class that contains several hooks.
###Code
#export
from torch.nn import init
class Hooks(ListContainer):
def __init__(self, ms, f): super().__init__([Hook(m, f) for m in ms])
def __delitem__(self, i):
self[i].remove()
super().__delitem__(i)
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
def remove(self):
for h in self: h.remove()
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
hooks = Hooks(model, append_stats)
hooks
hooks.remove()
x,y = next(iter(data.train_dl))
x = mnist_resize(x).cuda()
x.mean(),x.std()
p = model[0](x)
p.mean(),p.std()
for l in model:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight)
l[0].bias.data.zero_()
p = model[0](x)
p.mean(),p.std()
###Output
_____no_output_____
###Markdown
Having given an `__enter__` and `__exit__` method to our `Hooks` class, we can use it as a context manager. This makes sure that once we are out of the `with` block, all the hooks have been removed and aren't there to pollute our memory.
###Code
with Hooks(model, append_stats) as hooks:
run.fit(2, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(5));
###Output
train: [1.48118421875, tensor(0.5069, device='cuda:0')]
valid: [0.75658740234375, tensor(0.7937, device='cuda:0')]
train: [0.25740671875, tensor(0.9225, device='cuda:0')]
valid: [0.16629852294921876, tensor(0.9504, device='cuda:0')]
###Markdown
Other statistics
- pct < x
- percentiles
Generalized ReLU Now let's use our model with a generalized ReLU that can be shifted and capped at a maximum value.
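As a sketch of the `pct < x` idea (the function name and threshold below are just illustrative), the stats function could also record the fraction of activations under some threshold:
###Code
# Sketch: also track the fraction of activations below a threshold (e.g. 0, i.e. 'dead' ReLU outputs)
def append_stats_pct(hook, mod, inp, outp, thresh=0.):
    if not hasattr(hook,'stats'): hook.stats = ([],[],[])
    means,stds,pcts = hook.stats
    means.append(outp.data.mean())
    stds .append(outp.data.std())
    pcts .append((outp.data < thresh).float().mean())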
###Code
#export
def get_cnn_layers(data, nfs, layer, **kwargs):
nfs = [1] + nfs
return [layer(nfs[i], nfs[i+1], 5 if i==0 else 3, **kwargs)
for i in range(len(nfs)-1)] + [
nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def conv_layer(ni, nf, ks=3, stride=2, **kwargs):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), GeneralRelu(**kwargs))
class GeneralRelu(nn.Module):
def __init__(self, leak=None, sub=None, maxv=None):
super().__init__()
self.leak,self.sub,self.maxv = leak,sub,maxv
def forward(self, x):
x = F.leaky_relu(x,self.leak) if self.leak is not None else F.relu(x)
if self.sub is not None: x.sub_(self.sub)
if self.maxv is not None: x.clamp_max_(self.maxv)
return x
def init_cnn(m, uniform=False):
f = init.kaiming_uniform_ if uniform else init.kaiming_normal_
for l in m:
if isinstance(l, nn.Sequential):
f(l[0].weight, a=0.1)
l[0].bias.data.zero_()
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
model = get_cnn_model(data, nfs, conv_layer, leak=0.1, sub=0.4, maxv=6.)
init_cnn(model)
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
with Hooks(model, append_stats) as hooks:
run.fit(2, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(5));
#export
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model, uniform=uniform)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
sched = combine_scheds([0.5, 0.5], [sched_cos(0.2, 1.), sched_cos(1., 0.1)])
learn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.100308125, tensor(0.6448, device='cuda:0')]
valid: [0.357441064453125, tensor(0.8959, device='cuda:0')]
train: [0.346201171875, tensor(0.8934, device='cuda:0')]
valid: [1.25831875, tensor(0.7231, device='cuda:0')]
train: [0.362585078125, tensor(0.8933, device='cuda:0')]
valid: [0.13587474365234375, tensor(0.9611, device='cuda:0')]
train: [0.21276736328125, tensor(0.9364, device='cuda:0')]
valid: [0.1316833740234375, tensor(0.9627, device='cuda:0')]
train: [0.092967685546875, tensor(0.9713, device='cuda:0')]
valid: [0.0739874755859375, tensor(0.9782, device='cuda:0')]
train: [0.0610211962890625, tensor(0.9807, device='cuda:0')]
valid: [0.07001009521484375, tensor(0.9796, device='cuda:0')]
train: [0.0464786962890625, tensor(0.9856, device='cuda:0')]
valid: [0.06202723388671875, tensor(0.9826, device='cuda:0')]
train: [0.03790337158203125, tensor(0.9889, device='cuda:0')]
valid: [0.05997108764648437, tensor(0.9831, device='cuda:0')]
###Markdown
Uniform init may provide more useful initial weights.
###Code
learn,run = get_learn_run(nfs, data, 1., conv_layer, uniform=True,
cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.3703465625, tensor(0.5662, device='cuda:0')]
valid: [0.32617451171875, tensor(0.9049, device='cuda:0')]
train: [0.3398132421875, tensor(0.8999, device='cuda:0')]
valid: [0.18495489501953125, tensor(0.9453, device='cuda:0')]
train: [0.209861171875, tensor(0.9386, device='cuda:0')]
valid: [0.1167170166015625, tensor(0.9672, device='cuda:0')]
train: [0.115620771484375, tensor(0.9648, device='cuda:0')]
valid: [0.08911015625, tensor(0.9735, device='cuda:0')]
train: [0.0907637109375, tensor(0.9725, device='cuda:0')]
valid: [0.08125319213867188, tensor(0.9766, device='cuda:0')]
train: [0.0563795166015625, tensor(0.9819, device='cuda:0')]
valid: [0.0629861083984375, tensor(0.9823, device='cuda:0')]
train: [0.039874599609375, tensor(0.9879, device='cuda:0')]
valid: [0.0616022216796875, tensor(0.9841, device='cuda:0')]
train: [0.03255824462890625, tensor(0.9902, device='cuda:0')]
valid: [0.05708772583007812, tensor(0.9846, device='cuda:0')]
###Markdown
Export
###Code
!python notebook2script.py 06_cuda_cnn_hooks_init.ipynb
###Output
Converted 06_cuda_cnn_hooks_init.ipynb to nb_06.py
###Markdown
ConvNet
###Code
x_train,y_train,x_valid,y_valid = get_data()
#export
def normalize_to(train, valid):
m,s = train.mean(),train.std()
return normalize(train, m, s), normalize(valid, m, s)
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
x_train.mean(),x_train.std()
nh,bs = 50,512
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs))
#export
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x): return self.func(x)
def flatten(x): return x.view(x.shape[0], -1)
def mnist_resize(x): return x.view(-1, 1, 28, 28)
def get_cnn_model(data):
return nn.Sequential(
Lambda(mnist_resize),
nn.Conv2d( 1, 8, 5, padding=2,stride=2), nn.ReLU(), #14
nn.Conv2d( 8,16, 3, padding=1,stride=2), nn.ReLU(), # 7
nn.Conv2d(16,16, 3, padding=1,stride=2), nn.ReLU(), # 4
nn.AdaptiveAvgPool2d(1),
Lambda(flatten),
nn.Linear(16,data.c)
)
model = get_cnn_model(data)
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(AvgStatsCallback(accuracy))
%time run.fit(1, learn)
###Output
train: [2.20186, tensor(0.1866)]
valid: [1.8516697265625, tensor(0.3363)]
CPU times: user 6.11 s, sys: 4.09 s, total: 10.2 s
Wall time: 3.42 s
###Markdown
CUDA
###Code
#export
class CudaCallback(Callback):
def begin_fit(self, run): run.model.cuda()
def begin_batch(self, run): run.xb,run.yb = run.xb.cuda(),run.yb.cuda()
model = get_cnn_model(data)
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner([AvgStatsCallback(accuracy), CudaCallback()])
%time run.fit(3, learn)
###Output
train: [2.250249375, tensor(0.1483, device='cuda:0')]
valid: [1.8449322265625, tensor(0.3548, device='cuda:0')]
train: [1.68084390625, tensor(0.4144, device='cuda:0')]
valid: [1.570175390625, tensor(0.4762, device='cuda:0')]
train: [1.044627421875, tensor(0.6583, device='cuda:0')]
valid: [0.636278662109375, tensor(0.7889, device='cuda:0')]
CPU times: user 3.53 s, sys: 806 ms, total: 4.34 s
Wall time: 4.25 s
###Markdown
Refactor model
###Code
def conv2d(ni, nf, ks=3, stride=2):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), nn.ReLU())
#export
class BatchTransformXCallback(Callback):
_order=2
def __init__(self, tfm): self.tfm = tfm
def begin_batch(self, run): run.xb = self.tfm(run.xb)
def resize_tfm(*size):
def _inner(x): return x.view(*((-1,)+size))
return _inner
###Output
_____no_output_____
###Markdown
This model can now work on any size input:
###Code
def get_cnn_layers(data, nfs):
nfs = [1] + nfs
return [
conv2d(nfs[i], nfs[i+1], 5 if i==1 else 3)
for i in range(len(nfs)-1)
] + [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs): return nn.Sequential(*get_cnn_layers(data, nfs))
def get_runner(model, lr=0.6, cbs=None, loss_func = F.cross_entropy):
opt = optim.SGD(model.parameters(), lr=lr)
learn = Learner(model, opt, loss_func, data)
return learn, Runner([AvgStatsCallback([accuracy]), CudaCallback(),
BatchTransformXCallback(resize_tfm(1,28,28))] + listify(cbs))
nfs = [8,16,32,32]
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, lr=0.5)
run.fit(3, learn)
###Output
train: [2.1264459375, tensor(0.2616, device='cuda:0')]
valid: [1.581421484375, tensor(0.4090, device='cuda:0')]
train: [0.675396484375, tensor(0.7758, device='cuda:0')]
valid: [0.228989208984375, tensor(0.9329, device='cuda:0')]
train: [0.214808203125, tensor(0.9346, device='cuda:0')]
valid: [0.149219140625, tensor(0.9553, device='cuda:0')]
###Markdown
Hooks Manual insertion
###Code
class SequentialModel(nn.Module):
def __init__(self, *layers):
super().__init__()
self.layers = nn.ModuleList(layers)
self.act_means = [[] for _ in layers]
self.act_stds = [[] for _ in layers]
def __call__(self, x):
for i,l in enumerate(self.layers):
x = l(x)
self.act_means[i].append(x.mean())
self.act_stds [i].append(x.std ())
return x
def __iter__(self): return iter(self.layers)
model = SequentialModel(*get_cnn_layers(data, nfs))
learn,run = get_runner(model, lr=0.5)
run.fit(2, learn)
for l in model.act_means: plt.plot(l)
plt.legend(range(5));
for l in model.act_stds: plt.plot(l)
plt.legend(range(5));
for l in model.act_means: plt.plot(l[:10])
plt.legend(range(5));
for l in model.act_stds: plt.plot(l[:10])
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Pytorch hooks
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, lr=0.5)
act_means = [[] for _ in model]
act_stds = [[] for _ in model]
def append_stats(i, mod, inp, outp):
act_means[i].append(outp.mean())
act_stds [i].append(outp.std())
for i,m in enumerate(model): m.register_forward_hook(partial(append_stats, i))
run.fit(1, learn)
for o in act_means: plt.plot(o)
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Hook class
###Code
#export
def children(m): return list(m.children())
class Hook():
def __init__(self, m, f):
self.means = []
self.stds = []
self.hook = m.register_forward_hook(partial(f, self))
def remove(self): self.hook.remove()
def __del__(self): self.remove()
def append_stats(hook, mod, inp, outp):
hook.means.append(outp.mean())
hook.stds .append(outp.std())
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, lr=0.5)
hooks = [Hook(l, append_stats) for l in children(model[:4])]
run.fit(1, learn)
for h in hooks:
plt.plot(h.means)
h.remove()
plt.legend(range(4));
###Output
_____no_output_____
###Markdown
A Hooks class
###Code
#export
from torch.nn import init
class Hooks():
def __init__(self, ms, f): self.hooks = [Hook(m, f) for m in ms]
def __getitem__(self,i): return self.hooks[i]
def __len__(self): return len(self.hooks)
def __iter__(self): return iter(self.hooks)
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
def remove(self):
for h in self.hooks: h.remove()
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, lr=0.5)
x,y = next(iter(data.valid_dl))
x = mnist_resize(x).cuda()
x.mean(),x.std()
p = model[0](x)
p.mean(),p.std()
for l in model:
if isinstance(l, nn.Sequential): init.kaiming_normal_(l[0].weight)
p = model[0](x)
p.mean(),p.std()
with Hooks(model, append_stats) as hooks:
run.fit(1, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ax0.plot(h.means[:10])
ax1.plot(h.stds[:10])
h.remove()
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ax0.plot(h.means)
ax1.plot(h.stds)
plt.legend(range(5));
###Output
train: [1.49044203125, tensor(0.4927, device='cuda:0')]
valid: [0.6361658203125, tensor(0.8141, device='cuda:0')]
###Markdown
Generalized ReLU
###Code
#export
def get_cnn_layers(data, nfs, **kwargs):
nfs = [1] + nfs
return [conv2d(nfs[i], nfs[i+1], **kwargs)
for i in range(len(nfs)-1)] + [
nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs, **kwargs): return nn.Sequential(*get_cnn_layers(data, nfs, **kwargs))
def conv2d(ni, nf, ks=3, stride=2, **kwargs):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), GeneralRelu(**kwargs))
class GeneralRelu(nn.Module):
def __init__(self, leak=None, sub=None, maxv=None):
super().__init__()
self.leak,self.sub,self.maxv = leak,sub,maxv
def forward(self, x):
x = F.leaky_relu(x,self.leak) if self.leak is not None else F.relu(x)
if self.sub is not None: x.sub_(self.sub)
if self.maxv is not None: x.clamp_max_(self.maxv)
return x
model = SequentialModel(*get_cnn_layers(data, nfs, leak=0.1, sub=0.4, maxv=6.))
for l in model:
if isinstance(l, nn.Sequential): init.kaiming_normal_(l[0].weight, a=0.1)
learn,run = get_runner(model, lr=0.1)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
with Hooks(model, append_stats) as hooks:
run.fit(1, learn)
for h in hooks:
ax0.plot(h.means[:10])
ax1.plot(h.stds[:10])
h.remove()
ax0.legend(range(6));
def init_cnn(m):
for l in m:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight, a=0.1)
            l[0].weight.data.mul_(1.1)  # small empirical rescaling so the activation stds come out closer to 1 (compare with the plots above)
model = SequentialModel(*get_cnn_layers(data, nfs, leak=0.1, sub=0.4, maxv=6.))
init_cnn(model)
learn,run = get_runner(model, lr=0.1)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
with Hooks(model, append_stats) as hooks:
run.fit(1, learn)
for h in hooks:
ax0.plot(h.means[:10])
ax1.plot(h.stds[:10])
h.remove()
ax0.legend(range(6));
def get_learn_run(nfs, lr, cbs=None):
model = SequentialModel(*get_cnn_layers(data, nfs, leak=0.1, sub=0.4, maxv=6.))
init_cnn(model)
return get_runner(model, lr=lr, cbs=cbs)
sched = combine_scheds([0.3, 0.7], [sched_lin(0.2, 1.), sched_lin(1., 0.1)])  # linear warm-up from 0.2 to 1. over the first 30%, then linear decay down to 0.1
learn,run = get_learn_run([8, 16, 32, 64], 1., cbs=ParamScheduler('lr', sched))
run.fit(8, learn)
###Output
train: [0.788958828125, tensor(0.7592, device='cuda:0')]
valid: [0.2636699951171875, tensor(0.9237, device='cuda:0')]
train: [0.22712306640625, tensor(0.9305, device='cuda:0')]
valid: [0.1691684814453125, tensor(0.9485, device='cuda:0')]
train: [0.16426267578125, tensor(0.9499, device='cuda:0')]
valid: [0.10736746826171875, tensor(0.9679, device='cuda:0')]
train: [0.09078033203125, tensor(0.9729, device='cuda:0')]
valid: [0.08951355590820312, tensor(0.9733, device='cuda:0')]
train: [0.0713287744140625, tensor(0.9786, device='cuda:0')]
valid: [0.0769127197265625, tensor(0.9777, device='cuda:0')]
train: [0.0581701171875, tensor(0.9830, device='cuda:0')]
valid: [0.072382470703125, tensor(0.9786, device='cuda:0')]
train: [0.050179658203125, tensor(0.9855, device='cuda:0')]
valid: [0.06867281494140624, tensor(0.9796, device='cuda:0')]
train: [0.0447371826171875, tensor(0.9883, device='cuda:0')]
valid: [0.0665090576171875, tensor(0.9799, device='cuda:0')]
###Markdown
Export
###Code
!python notebook2script.py 06_cuda_cnn_hooks_init.ipynb
###Output
Converted 06_cuda_cnn_hooks_init.ipynb to nb_06.py
###Markdown
ConvNet
###Code
x_train,y_train,x_valid,y_valid = get_data()
###Output
_____no_output_____
###Markdown
Helper function to quickly normalize with the mean and standard deviation from our training set:
###Code
#export
def normalize_to(train, valid):
m,s = train.mean(),train.std()
return normalize(train, m, s), normalize(valid, m, s)
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
###Output
_____no_output_____
###Markdown
Let's check it behaved properly.
###Code
x_train.mean(),x_train.std()
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
###Output
_____no_output_____
###Markdown
To refactor layers, it's useful to have a `Lambda` layer that can take a basic function and convert it to a layer you can put in `nn.Sequential`. NB: if you use a Lambda layer with a lambda function, your model won't pickle so you won't be able to save it with PyTorch. So it's best to give a name to the function you're using inside your Lambda (like flatten below).
###Code
#export
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x): return self.func(x)
def flatten(x): return x.view(x.shape[0], -1)
###Output
_____no_output_____
###Markdown
This one takes the flat vector of size `bs x 784` and puts it back as a batch of images of 28 by 28 pixels:
###Code
def mnist_resize(x): return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
###Markdown
We can now define a simple CNN.
###Code
def get_cnn_model(data):
return nn.Sequential(
Lambda(mnist_resize),
nn.Conv2d( 1, 8, 5, padding=2,stride=2), nn.ReLU(), #14
nn.Conv2d( 8,16, 3, padding=1,stride=2), nn.ReLU(), # 7
nn.Conv2d(16,32, 3, padding=1,stride=2), nn.ReLU(), # 4
nn.Conv2d(32,32, 3, padding=1,stride=2), nn.ReLU(), # 2
nn.AdaptiveAvgPool2d(1),
Lambda(flatten),
nn.Linear(32,data.c)
)
model = get_cnn_model(data)
###Output
_____no_output_____
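###Markdown
The `#14`, `# 7`, `# 4`, `# 2` comments track the spatial size after each stride-2 convolution: `out = (in + 2*padding - ks)//stride + 1`, so the first layer maps 28x28 to (28 + 2*2 - 5)//2 + 1 = 14. A quick check of that first layer (a sketch on random data):
###Code
# Sketch: the first conv halves the 28x28 input to 14x14
nn.Conv2d(1, 8, 5, padding=2, stride=2)(torch.randn(1, 1, 28, 28)).shape  # torch.Size([1, 8, 14, 14])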
###Markdown
Basic callbacks from the previous notebook:
###Code
cbfs = [Recorder, partial(AvgStatsCallback,accuracy)]
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(1, learn)
###Output
train: [2.21820171875, tensor(0.1936)]
valid: [1.403558203125, tensor(0.5666)]
CPU times: user 7.07 s, sys: 124 ms, total: 7.2 s
Wall time: 2.62 s
###Markdown
CUDA This took a long time to run, so it's time to use a GPU. A simple Callback can make sure the model, inputs and targets are all on the same device.
###Code
# Somewhat more flexible way
device = torch.device('cuda',0)
class CudaCallback(Callback):
def __init__(self,device): self.device=device
def begin_fit(self): self.model.to(device)
def begin_batch(self): self.run.xb,self.run.yb = self.xb.to(device),self.yb.to(device)
# Somewhat less flexible, but quite convenient
torch.cuda.set_device(device)
#export
class CudaCallback(Callback):
def begin_fit(self): self.model.cuda()
def begin_batch(self): self.run.xb,self.run.yb = self.xb.cuda(),self.yb.cuda()
cbfs.append(CudaCallback)
model = get_cnn_model(data)
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(3, learn)
###Output
train: [2.103035625, tensor(0.2337, device='cuda:0')]
valid: [1.59496875, tensor(0.4684, device='cuda:0')]
train: [0.476135546875, tensor(0.8502, device='cuda:0')]
valid: [0.1915430419921875, tensor(0.9424, device='cuda:0')]
train: [0.1780847265625, tensor(0.9459, device='cuda:0')]
valid: [0.1391046875, tensor(0.9599, device='cuda:0')]
CPU times: user 2.95 s, sys: 408 ms, total: 3.36 s
Wall time: 3.38 s
###Markdown
Now, that's definitely faster! Refactor model First we can regroup all the conv/relu in a single function:
###Code
def conv2d(ni, nf, ks=3, stride=2):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), nn.ReLU())
###Output
_____no_output_____
###Markdown
Another thing we can do is apply the mnist resize as a batch transform, which we can implement with a Callback.
###Code
#export
class BatchTransformXCallback(Callback):
_order=2
def __init__(self, tfm): self.tfm = tfm
def begin_batch(self): self.run.xb = self.tfm(self.xb)
def view_tfm(*size):
def _inner(x): return x.view(*((-1,)+size))
return _inner
mnist_view = view_tfm(1,28,28)
cbfs.append(partial(BatchTransformXCallback, mnist_view))
###Output
_____no_output_____
###Markdown
With the `AdaptiveAvgPool`, this model can now work on any size input:
###Code
nfs = [8,16,32,32]
def get_cnn_layers(data, nfs):
nfs = [1] + nfs
return [
conv2d(nfs[i], nfs[i+1], 5 if i==0 else 3)
for i in range(len(nfs)-1)
] + [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs): return nn.Sequential(*get_cnn_layers(data, nfs))
###Output
_____no_output_____
###Markdown
And this helper function will quickly give us everything needed to run the training.
###Code
#export
def get_runner(model, data, lr=0.6, cbs=None, opt_func=None, loss_func = F.cross_entropy):
if opt_func is None: opt_func = optim.SGD
opt = opt_func(model.parameters(), lr=lr)
learn = Learner(model, opt, loss_func, data)
return learn, Runner(cb_funcs=listify(cbs))
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.4, cbs=cbfs)
model
run.fit(3, learn)
###Output
train: [1.81677875, tensor(0.3958, device='cuda:0')]
valid: [0.689938623046875, tensor(0.7734, device='cuda:0')]
train: [0.40679828125, tensor(0.8776, device='cuda:0')]
valid: [0.2156467529296875, tensor(0.9328, device='cuda:0')]
train: [0.2014695703125, tensor(0.9398, device='cuda:0')]
valid: [0.221406982421875, tensor(0.9305, device='cuda:0')]
###Markdown
Hooks Manual insertion Let's say we want to do some telemetry, and want the mean and standard deviation of the activations in each layer of the model. First we can do it manually like this:
###Code
class SequentialModel(nn.Module):
def __init__(self, *layers):
super().__init__()
self.layers = nn.ModuleList(layers)
self.act_means = [[] for _ in layers]
self.act_stds = [[] for _ in layers]
def __call__(self, x):
for i,l in enumerate(self.layers):
x = l(x)
if self.training:
self.act_means[i].append(x.data.mean())
self.act_stds [i].append(x.data.std ())
return x
def __iter__(self): return iter(self.layers)
model = SequentialModel(*get_cnn_layers(data, nfs))
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
run.fit(2, learn)
###Output
train: [2.12532046875, tensor(0.2432, device='cuda:0')]
valid: [1.35027255859375, tensor(0.5570, device='cuda:0')]
train: [0.51112546875, tensor(0.8328, device='cuda:0')]
valid: [0.2070518310546875, tensor(0.9335, device='cuda:0')]
###Markdown
Now we can have a look at the means and stds of the activations at the beginning of training.
###Code
for l in model.act_means: plt.plot(l)
plt.legend(range(6));
for l in model.act_stds: plt.plot(l)
plt.legend(range(6));
for l in model.act_means: plt.plot(l[:10])
plt.legend(range(6));
for l in model.act_stds: plt.plot(l[:10])
plt.legend(range(6));
###Output
_____no_output_____
###Markdown
Pytorch hooks Hooks are PyTorch objects you can add to any nn.Module. A hook will be called when the layer it is registered on is executed during the forward pass (forward hook) or the backward pass (backward hook). Hooks don't require us to rewrite the model.
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
act_means = [[] for _ in model]
act_stds = [[] for _ in model]
###Output
_____no_output_____
###Markdown
A hook is attached to a layer, and needs to have a function that takes three arguments: module, input, output. Here we store the mean and std of the output in the correct position of our list.
###Code
def append_stats(i, mod, inp, outp):
if mod.training:
act_means[i].append(outp.data.mean())
act_stds [i].append(outp.data.std())
for i,m in enumerate(model): m.register_forward_hook(partial(append_stats, i))
run.fit(1, learn)
for o in act_means: plt.plot(o)
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Hook class We can refactor this into a Hook class. It's very important to remove the hooks when they are deleted, otherwise references will be kept and the memory won't be properly released when your model is deleted.
###Code
#export
def children(m): return list(m.children())
class Hook():
def __init__(self, m, f): self.hook = m.register_forward_hook(partial(f, self))
def remove(self): self.hook.remove()
def __del__(self): self.remove()
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[])
means,stds = hook.stats
if mod.training:
means.append(outp.data.mean())
stds .append(outp.data.std())
###Output
_____no_output_____
###Markdown
NB: In fastai we use a `bool` param to choose whether to make it a forward or backward hook. In the above version we're only supporting forward hooks.
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
hooks = [Hook(l, append_stats) for l in children(model[:4])]
run.fit(1, learn)
for h in hooks:
plt.plot(h.stats[0])
h.remove()
plt.legend(range(4));
###Output
_____no_output_____
###Markdown
A Hooks class Let's design our own class that can contain a list of objects. It will behave a bit like a numpy array in the sense that we can index into it via:
- a single index
- a slice (like 1:5)
- a list of indices
- a mask of indices (`[True,False,False,True,...]`)
The `__iter__` method is there to be able to do things like `for x in ...`.
###Code
#export
class ListContainer():
def __init__(self, items): self.items = listify(items)
def __getitem__(self, idx):
try: return self.items[idx]
except TypeError:
if isinstance(idx[0],bool):
assert len(idx)==len(self) # bool mask
return [o for m,o in zip(idx,self.items) if m]
return [self.items[i] for i in idx]
def __len__(self): return len(self.items)
def __iter__(self): return iter(self.items)
def __setitem__(self, i, o): self.items[i] = o
def __delitem__(self, i): del(self.items[i])
def __repr__(self):
res = f'{self.__class__.__name__} ({len(self)} items)\n{self.items[:10]}'
if len(self)>10: res = res[:-1]+ '...]'
return res
ListContainer(range(10))
ListContainer(range(100))
t = ListContainer(range(10))
t[[1,2]], t[[False]*8 + [True,False]]
t[tensor(3)]
###Output
_____no_output_____
###Markdown
We can use it to write a `Hooks` class that contains several hooks. We will also use it in the next notebook as a container for our objects in the data block API.
###Code
#export
from torch.nn import init
class Hooks(ListContainer):
def __init__(self, ms, f): super().__init__([Hook(m, f) for m in ms])
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
def __del__(self): self.remove()
def __delitem__(self, i):
self[i].remove()
super().__delitem__(i)
def remove(self):
for h in self: h.remove()
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
hooks = Hooks(model, append_stats)
hooks
hooks.remove()
x,y = next(iter(data.train_dl))
x = mnist_resize(x).cuda()
x.mean(),x.std()
p = model[0](x)
p.mean(),p.std()
for l in model:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight)
l[0].bias.data.zero_()
p = model[0](x)
p.mean(),p.std()
###Output
_____no_output_____
###Markdown
Having given an `__enter__` and `__exit__` method to our `Hooks` class, we can use it as a context manager. This makes sure that once we are out of the `with` block, all the hooks have been removed and aren't there to pollute our memory.
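For example (a sketch), the hooks are detached even if something fails inside the block:
###Code
# Sketch: __exit__ runs (and removes every hook) even when an exception escapes the block
try:
    with Hooks(model, append_stats) as hs:
        raise RuntimeError('simulated failure')
except RuntimeError:
    pass
# at this point all the hooks registered by `hs` have already been removed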
###Code
with Hooks(model, append_stats) as hooks:
run.fit(2, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
plt.legend(range(6));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(6));
###Output
train: [1.7958403125, tensor(0.4004, device='cuda:0')]
valid: [0.447680517578125, tensor(0.8629, device='cuda:0')]
train: [0.595609609375, tensor(0.8183, device='cuda:0')]
valid: [0.569426123046875, tensor(0.8380, device='cuda:0')]
###Markdown
Other statistics Let's store more than the means and stds and plot histograms of our activations now.
###Code
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[],[])
means,stds,hists = hook.stats
if mod.training:
means.append(outp.data.mean().cpu())
stds .append(outp.data.std().cpu())
hists.append(outp.data.cpu().histc(40,0,10)) #histc isn't implemented on the GPU
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
for l in model:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight)
l[0].bias.data.zero_()
with Hooks(model, append_stats) as hooks: run.fit(1, learn)
# Thanks to @ste for initial version of histogram plotting code
def get_hist(h): return torch.stack(h.stats[2]).t().float().log1p()
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.imshow(get_hist(h), origin='lower')
ax.axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
From the histograms, we can easily compute more statistics; here, the fraction of activations that fall in the lowest bins (i.e. that are close to zero):
###Code
def get_min(h):
h1 = torch.stack(h.stats[2]).t().float()
return h1[:2].sum(0)/h1.sum(0)
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.plot(get_min(h))
ax.set_ylim(0,1)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Generalized ReLU Now let's use our model with a generalized ReLU that can be shifted and capped at a maximum value.
###Code
#export
def get_cnn_layers(data, nfs, layer, **kwargs):
nfs = [1] + nfs
return [layer(nfs[i], nfs[i+1], 5 if i==0 else 3, **kwargs)
for i in range(len(nfs)-1)] + [
nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def conv_layer(ni, nf, ks=3, stride=2, **kwargs):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), GeneralRelu(**kwargs))
class GeneralRelu(nn.Module):
def __init__(self, leak=None, sub=None, maxv=None):
super().__init__()
self.leak,self.sub,self.maxv = leak,sub,maxv
def forward(self, x):
x = F.leaky_relu(x,self.leak) if self.leak is not None else F.relu(x)
if self.sub is not None: x.sub_(self.sub)
if self.maxv is not None: x.clamp_max_(self.maxv)
return x
def init_cnn(m, uniform=False):
f = init.kaiming_uniform_ if uniform else init.kaiming_normal_
for l in m:
if isinstance(l, nn.Sequential):
f(l[0].weight, a=0.1)
l[0].bias.data.zero_()
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[],[])
means,stds,hists = hook.stats
if mod.training:
means.append(outp.data.mean().cpu())
stds .append(outp.data.std().cpu())
hists.append(outp.data.cpu().histc(40,-7,7))
model = get_cnn_model(data, nfs, conv_layer, leak=0.1, sub=0.4, maxv=6.)
init_cnn(model)
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
with Hooks(model, append_stats) as hooks:
run.fit(1, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss,hi = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss,hi = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(5));
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.imshow(get_hist(h), origin='lower')
ax.axis('off')
plt.tight_layout()
def get_min(h):
h1 = torch.stack(h.stats[2]).t().float()
return h1[19:22].sum(0)/h1.sum(0)
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.plot(get_min(h))
ax.set_ylim(0,1)
plt.tight_layout()
#export
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model, uniform=uniform)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
sched = combine_scheds([0.5, 0.5], [sched_cos(0.2, 1.), sched_cos(1., 0.1)])
learn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.213843359375, tensor(0.6207, device='cuda:0')]
valid: [0.308865869140625, tensor(0.9118, device='cuda:0')]
train: [0.31337412109375, tensor(0.9044, device='cuda:0')]
valid: [0.18531549072265624, tensor(0.9418, device='cuda:0')]
train: [0.514809609375, tensor(0.8428, device='cuda:0')]
valid: [0.3498984619140625, tensor(0.8917, device='cuda:0')]
train: [0.534424921875, tensor(0.8684, device='cuda:0')]
valid: [2.154244140625, tensor(0.2246, device='cuda:0')]
train: [1.1689215625, tensor(0.6173, device='cuda:0')]
valid: [0.2096001708984375, tensor(0.9354, device='cuda:0')]
train: [0.18139208984375, tensor(0.9438, device='cuda:0')]
valid: [0.14386746826171876, tensor(0.9588, device='cuda:0')]
train: [0.121735712890625, tensor(0.9626, device='cuda:0')]
valid: [0.11306314697265625, tensor(0.9673, device='cuda:0')]
train: [0.10355359375, tensor(0.9679, device='cuda:0')]
valid: [0.10647432861328125, tensor(0.9694, device='cuda:0')]
###Markdown
Uniform init may provide more useful initial weights (normal distribution puts a lot of them at 0).
###Code
learn,run = get_learn_run(nfs, data, 1., conv_layer, uniform=True,
cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.041688828125, tensor(0.6662, device='cuda:0')]
valid: [0.446877099609375, tensor(0.8561, device='cuda:0')]
train: [0.3849487890625, tensor(0.8829, device='cuda:0')]
valid: [0.235209033203125, tensor(0.9319, device='cuda:0')]
train: [0.384751484375, tensor(0.8856, device='cuda:0')]
valid: [0.5723826171875, tensor(0.8181, device='cuda:0')]
train: [0.262653671875, tensor(0.9206, device='cuda:0')]
valid: [0.11083226318359375, tensor(0.9657, device='cuda:0')]
train: [0.09154287109375, tensor(0.9716, device='cuda:0')]
valid: [0.08876705322265625, tensor(0.9735, device='cuda:0')]
train: [0.06415849609375, tensor(0.9804, device='cuda:0')]
valid: [0.06935145263671876, tensor(0.9801, device='cuda:0')]
train: [0.0487696728515625, tensor(0.9850, device='cuda:0')]
valid: [0.06632761840820313, tensor(0.9805, device='cuda:0')]
train: [0.04038564697265625, tensor(0.9885, device='cuda:0')]
valid: [0.06507407836914063, tensor(0.9818, device='cuda:0')]
###Markdown
Export Here's a handy way to export our module without needing to update the file name - after we define this, we can just use `nb_auto_export()` in the future (h/t Stas Bekman):
###Code
#export
from IPython.display import display, Javascript
def nb_auto_export():
display(Javascript("""{
const ip = IPython.notebook
if (ip) {
ip.save_notebook()
console.log('a')
const s = `!python notebook2script.py ${ip.notebook_name}`
if (ip.kernel) { ip.kernel.execute(s) }
}
}"""))
nb_auto_export()
###Output
_____no_output_____
###Markdown
ConvNet
###Code
x_train,y_train,x_valid,y_valid = get_data()
###Output
_____no_output_____
###Markdown
Helper function to quickly normalize with the mean and standard deviation from our training set:
###Code
#export
def normalize_to(train, valid):
m,s = train.mean(),train.std()
return normalize(train, m, s), normalize(valid, m, s)
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
###Output
_____no_output_____
###Markdown
Let's check it behaved properly.
###Code
x_train.mean(),x_train.std()
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
###Output
_____no_output_____
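###Markdown
The output of that check isn't captured above, so here is a minimal sketch of what "behaved properly" means: after `normalize_to`, the training set should have a mean close to 0 and a standard deviation close to 1.
###Code
m,s = x_train.mean().item(),x_train.std().item()
assert abs(m) < 1e-2 and abs(s - 1) < 1e-2, (m, s)
###Output
_____no_output_____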
###Markdown
To refactor layers, it's useful to have a `Lambda` layer that can take a basic function and convert it to a layer you can put in `nn.Sequential`. NB: if you use a Lambda layer with a lambda function, your model won't pickle, so you won't be able to save it with PyTorch. So it's best to give a name to the function you're using inside your Lambda (like `flatten` below).
###Code
#export
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x): return self.func(x)
def flatten(x): return x.view(x.shape[0], -1)
###Output
_____no_output_____
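###Markdown
A small sketch of the pickling caveat above, using the `Lambda` class and `flatten` we just defined: serializing works with a named function but fails with an anonymous lambda.
###Code
import pickle
pickle.dumps(Lambda(flatten))  # works: flatten is a named, module-level function
try: pickle.dumps(Lambda(lambda x: x.view(x.shape[0], -1)))
except Exception as e: print('lambda version does not pickle:', e)
###Output
_____no_output_____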
###Markdown
This one takes the flat vector of size `bs x 784` and puts it back as a batch of images of 28 by 28 pixels:
###Code
def mnist_resize(x): return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
###Markdown
We can now define a simple CNN.
###Code
def get_cnn_model(data):
return nn.Sequential(
Lambda(mnist_resize),
nn.Conv2d( 1, 8, 5, padding=2,stride=2), nn.ReLU(), #14
nn.Conv2d( 8,16, 3, padding=1,stride=2), nn.ReLU(), # 7
nn.Conv2d(16,32, 3, padding=1,stride=2), nn.ReLU(), # 4
nn.Conv2d(32,32, 3, padding=1,stride=2), nn.ReLU(), # 2
nn.AdaptiveAvgPool2d(1),
Lambda(flatten),
nn.Linear(32,data.c)
)
model = get_cnn_model(data)
###Output
_____no_output_____
###Markdown
Basic callbacks from the previous notebook:
###Code
cbfs = [Recorder, partial(AvgStatsCallback,accuracy)]
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(1, learn)
###Output
train: [2.21820171875, tensor(0.1936)]
valid: [1.403558203125, tensor(0.5666)]
CPU times: user 7.07 s, sys: 124 ms, total: 7.2 s
Wall time: 2.62 s
###Markdown
CUDA This took a long time to run, so it's time to use a GPU. A simple Callback can make sure the model, inputs and targets are all on the same device.
###Code
# Somewhat more flexible way
device = torch.device('cuda',0)
class CudaCallback(Callback):
def __init__(self,device): self.device=device
def begin_fit(self): self.model.to(device)
def begin_batch(self): self.run.xb,self.run.yb = self.xb.to(device),self.yb.to(device)
# Somewhat less flexible, but quite convenient
torch.cuda.set_device(device)
#export
class CudaCallback(Callback):
def begin_fit(self): self.model.cuda()
def begin_batch(self): self.run.xb,self.run.yb = self.xb.cuda(),self.yb.cuda()
cbfs.append(CudaCallback)
model = get_cnn_model(data)
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(3, learn)
###Output
train: [2.103035625, tensor(0.2337, device='cuda:0')]
valid: [1.59496875, tensor(0.4684, device='cuda:0')]
train: [0.476135546875, tensor(0.8502, device='cuda:0')]
valid: [0.1915430419921875, tensor(0.9424, device='cuda:0')]
train: [0.1780847265625, tensor(0.9459, device='cuda:0')]
valid: [0.1391046875, tensor(0.9599, device='cuda:0')]
CPU times: user 2.95 s, sys: 408 ms, total: 3.36 s
Wall time: 3.38 s
###Markdown
Now, that's definitely faster! Refactor model First we can regroup all the conv/relu in a single function:
###Code
def conv2d(ni, nf, ks=3, stride=2):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), nn.ReLU())
###Output
_____no_output_____
###Markdown
Another thing we can do is perform the MNIST resize in a batch transform, which we can implement with a Callback.
###Code
#export
class BatchTransformXCallback(Callback):
_order=2
def __init__(self, tfm): self.tfm = tfm
def begin_batch(self): self.run.xb = self.tfm(self.xb)
def view_tfm(*size):
def _inner(x): return x.view(*((-1,)+size))
return _inner
mnist_view = view_tfm(1,28,28)
cbfs.append(partial(BatchTransformXCallback, mnist_view))
###Output
_____no_output_____
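###Markdown
A one-line sanity check of `view_tfm` (a sketch, assuming `torch` is imported as elsewhere in this notebook): it turns a flat batch back into image-shaped tensors, which is exactly what the callback will apply to every `xb`.
###Code
view_tfm(1,28,28)(torch.randn(16, 784)).shape  # expected: torch.Size([16, 1, 28, 28])
###Output
_____no_output_____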
###Markdown
With the `AdaptiveAvgPool`, this model can now work on any size input:
###Code
nfs = [8,16,32,32]
def get_cnn_layers(data, nfs):
nfs = [1] + nfs
return [
conv2d(nfs[i], nfs[i+1], 5 if i==0 else 3)
for i in range(len(nfs)-1)
] + [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs): return nn.Sequential(*get_cnn_layers(data, nfs))
###Output
_____no_output_____
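###Markdown
A quick sketch to illustrate the claim above (again assuming `torch`): because the network ends with `nn.AdaptiveAvgPool2d(1)`, the same model maps batches of different spatial sizes to `data.c` logits.
###Code
any_size_model = get_cnn_model(data, nfs)
# both inputs should come out as (2, data.c), despite the different image sizes
[any_size_model(torch.randn(2, 1, sz, sz)).shape for sz in (28, 64)]
###Output
_____no_output_____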
###Markdown
And this helper function will quickly give us everything needed to run the training.
###Code
#export
def get_runner(model, data, lr=0.6, cbs=None, opt_func=None, loss_func = F.cross_entropy):
if opt_func is None: opt_func = optim.SGD
opt = opt_func(model.parameters(), lr=lr)
learn = Learner(model, opt, loss_func, data)
return learn, Runner(cb_funcs=listify(cbs))
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.4, cbs=cbfs)
model
run.fit(3, learn)
###Output
train: [1.81677875, tensor(0.3958, device='cuda:0')]
valid: [0.689938623046875, tensor(0.7734, device='cuda:0')]
train: [0.40679828125, tensor(0.8776, device='cuda:0')]
valid: [0.2156467529296875, tensor(0.9328, device='cuda:0')]
train: [0.2014695703125, tensor(0.9398, device='cuda:0')]
valid: [0.221406982421875, tensor(0.9305, device='cuda:0')]
###Markdown
Hooks Manual insertion Let's say we want to do some telemetry, and want the mean and standard deviation of the activations of each layer in the model. First we can do it manually like this:
###Code
class SequentialModel(nn.Module):
def __init__(self, *layers):
super().__init__()
self.layers = nn.ModuleList(layers)
self.act_means = [[] for _ in layers]
self.act_stds = [[] for _ in layers]
def __call__(self, x):
for i,l in enumerate(self.layers):
x = l(x)
if self.training:
self.act_means[i].append(x.data.mean())
self.act_stds [i].append(x.data.std ())
return x
def __iter__(self): return iter(self.layers)
model = SequentialModel(*get_cnn_layers(data, nfs))
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
run.fit(2, learn)
###Output
train: [2.12532046875, tensor(0.2432, device='cuda:0')]
valid: [1.35027255859375, tensor(0.5570, device='cuda:0')]
train: [0.51112546875, tensor(0.8328, device='cuda:0')]
valid: [0.2070518310546875, tensor(0.9335, device='cuda:0')]
###Markdown
Now we can have a look at the means and stds of the activations at the beginning of training.
###Code
for l in model.act_means: plt.plot(l)
plt.legend(range(6));
for l in model.act_stds: plt.plot(l)
plt.legend(range(6));
for l in model.act_means: plt.plot(l[:10])
plt.legend(range(6));
for l in model.act_stds: plt.plot(l[:10])
plt.legend(range(6));
###Output
_____no_output_____
###Markdown
Pytorch hooks Hooks are PyTorch objects you can add to any nn.Module. A hook will be called when the layer it is registered to is executed during the forward pass (forward hook) or the backward pass (backward hook). Hooks don't require us to rewrite the model.
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
act_means = [[] for _ in model]
act_stds = [[] for _ in model]
###Output
_____no_output_____
###Markdown
A hook is attached to a layer, and needs to have a function that takes three arguments: module, input, output. Here we store the mean and std of the output in the correct position of our list.
###Code
def append_stats(i, mod, inp, outp):
if mod.training:
act_means[i].append(outp.data.mean())
act_stds [i].append(outp.data.std())
for i,m in enumerate(model): m.register_forward_hook(partial(append_stats, i))
run.fit(1, learn)
for o in act_means: plt.plot(o)
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Hook class We can refactor this into a Hook class. It's very important to remove the hooks when they are deleted (hence the `__del__` below); otherwise references are kept and the memory won't be properly released when your model is deleted.
###Code
#export
def children(m): return list(m.children())
class Hook():
def __init__(self, m, f): self.hook = m.register_forward_hook(partial(f, self))
def remove(self): self.hook.remove()
def __del__(self): self.remove()
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[])
means,stds = hook.stats
if mod.training:
means.append(outp.data.mean())
stds .append(outp.data.std())
###Output
_____no_output_____
###Markdown
NB: In fastai we use a `bool` param to choose whether to make it a forward or backward hook. In the above version we're only supporting forward hooks.
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5, cbs=cbfs)
hooks = [Hook(l, append_stats) for l in children(model[:4])]
run.fit(1, learn)
for h in hooks:
plt.plot(h.stats[0])
h.remove()
plt.legend(range(4));
###Output
_____no_output_____
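###Markdown
The fastai `is_forward` flag mentioned above isn't implemented here; as a rough sketch (an assumption, not the fastai code), the same pattern could choose between the two PyTorch registration methods:
###Code
class HookFB():
    def __init__(self, m, f, is_forward=True):
        # forward hooks see activations, backward hooks see gradients
        register = m.register_forward_hook if is_forward else m.register_backward_hook
        self.hook = register(partial(f, self))
    def remove(self): self.hook.remove()
    def __del__(self): self.remove()
###Output
_____no_output_____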
###Markdown
A Hooks class Let's design our own class that can contain a list of objects. It will behave a bit like a numpy array in the sense that we can index into it via:
- a single index
- a slice (like 1:5)
- a list of indices
- a mask of indices (`[True,False,False,True,...]`)

The `__iter__` method is there to be able to do things like `for x in ...`.
###Code
#export
class ListContainer():
def __init__(self, items): self.items = listify(items)
def __getitem__(self, idx):
if isinstance(idx, (int,slice)): return self.items[idx]
if isinstance(idx[0],bool):
assert len(idx)==len(self) # bool mask
return [o for m,o in zip(idx,self.items) if m]
return [self.items[i] for i in idx]
def __len__(self): return len(self.items)
def __iter__(self): return iter(self.items)
def __setitem__(self, i, o): self.items[i] = o
def __delitem__(self, i): del(self.items[i])
def __repr__(self):
res = f'{self.__class__.__name__} ({len(self)} items)\n{self.items[:10]}'
if len(self)>10: res = res[:-1]+ '...]'
return res
ListContainer(range(10))
ListContainer(range(100))
t = ListContainer(range(10))
t[[1,2]], t[[False]*8 + [True,False]]
###Output
_____no_output_____
###Markdown
We can use it to write a `Hooks` class that contains several hooks. We will also use it in the next notebook as a container for our objects in the data block API.
###Code
#export
from torch.nn import init
class Hooks(ListContainer):
def __init__(self, ms, f): super().__init__([Hook(m, f) for m in ms])
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
def __del__(self): self.remove()
def __delitem__(self, i):
self[i].remove()
super().__delitem__(i)
def remove(self):
for h in self: h.remove()
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
hooks = Hooks(model, append_stats)
hooks
hooks.remove()
x,y = next(iter(data.train_dl))
x = mnist_resize(x).cuda()
x.mean(),x.std()
p = model[0](x)
p.mean(),p.std()
for l in model:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight)
l[0].bias.data.zero_()
p = model[0](x)
p.mean(),p.std()
###Output
_____no_output_____
###Markdown
Having given an `__enter__` and `__exit__` method to our `Hooks` class, we can use it as a context manager. This makes sure that once we are out of the `with` block, all the hooks have been removed and aren't there to pollute our memory.
###Code
with Hooks(model, append_stats) as hooks:
run.fit(2, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
plt.legend(range(6));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(6));
###Output
train: [1.7958403125, tensor(0.4004, device='cuda:0')]
valid: [0.447680517578125, tensor(0.8629, device='cuda:0')]
train: [0.595609609375, tensor(0.8183, device='cuda:0')]
valid: [0.569426123046875, tensor(0.8380, device='cuda:0')]
###Markdown
Other statistics Let's store more than the means and stds and plot histograms of our activations now.
###Code
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[],[])
means,stds,hists = hook.stats
if mod.training:
means.append(outp.data.mean().cpu())
stds .append(outp.data.std().cpu())
hists.append(outp.data.cpu().histc(40,0,10)) #histc isn't implemented on the GPU
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
for l in model:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight)
l[0].bias.data.zero_()
with Hooks(model, append_stats) as hooks: run.fit(1, learn)
# Thanks to @ste for the initial version of the histogram plotting code
def get_hist(h): return torch.stack(h.stats[2]).t().float().log1p()
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.imshow(get_hist(h), origin='lower')
ax.axis('off')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
From the histograms, we can easily get more information, like the min or max of the activations.
###Code
def get_min(h):
h1 = torch.stack(h.stats[2]).t().float()
return h1[:2].sum(0)/h1.sum(0)
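# Note on get_min above: despite the name, it returns the fraction of activations that
# fall in the two lowest histogram bins (the histc range is (0, 10)), i.e. the share
# of activations that are close to zero.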
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.plot(get_min(h))
ax.set_ylim(0,1)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Generalized ReLU Now let's use our model with a generalized ReLU that can be leaky, shifted, and capped at a maximum value.
###Code
#export
def get_cnn_layers(data, nfs, layer, **kwargs):
nfs = [1] + nfs
return [layer(nfs[i], nfs[i+1], 5 if i==0 else 3, **kwargs)
for i in range(len(nfs)-1)] + [
nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def conv_layer(ni, nf, ks=3, stride=2, **kwargs):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), GeneralRelu(**kwargs))
class GeneralRelu(nn.Module):
def __init__(self, leak=None, sub=None, maxv=None):
super().__init__()
self.leak,self.sub,self.maxv = leak,sub,maxv
def forward(self, x):
x = F.leaky_relu(x,self.leak) if self.leak is not None else F.relu(x)
if self.sub is not None: x.sub_(self.sub)
if self.maxv is not None: x.clamp_max_(self.maxv)
return x
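# For example, GeneralRelu(leak=0.1, sub=0.4, maxv=6.)(torch.tensor([-2., 0., 1., 100.]))
# gives tensor([-0.6000, -0.4000, 0.6000, 6.0000]):
# leaky below zero, then shifted down by `sub`, then clamped at `maxv`.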
def init_cnn(m, uniform=False):
f = init.kaiming_uniform_ if uniform else init.kaiming_normal_
for l in m:
if isinstance(l, nn.Sequential):
f(l[0].weight, a=0.1)
l[0].bias.data.zero_()
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[],[])
means,stds,hists = hook.stats
if mod.training:
means.append(outp.data.mean().cpu())
stds .append(outp.data.std().cpu())
hists.append(outp.data.cpu().histc(40,-7,7))
model = get_cnn_model(data, nfs, conv_layer, leak=0.1, sub=0.4, maxv=6.)
init_cnn(model)
learn,run = get_runner(model, data, lr=0.9, cbs=cbfs)
with Hooks(model, append_stats) as hooks:
run.fit(1, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss,hi = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss,hi = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(5));
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.imshow(get_hist(h), origin='lower')
ax.axis('off')
plt.tight_layout()
def get_min(h):
h1 = torch.stack(h.stats[2]).t().float()
return h1[19:22].sum(0)/h1.sum(0)
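# With the histc range of (-7, 7) used for this model, bins 19-21 sit around zero,
# so this again measures the share of near-zero activations.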
fig,axes = plt.subplots(2,2, figsize=(15,6))
for ax,h in zip(axes.flatten(), hooks[:4]):
ax.plot(get_min(h))
ax.set_ylim(0,1)
plt.tight_layout()
#export
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, uniform=False, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model, uniform=uniform)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
sched = combine_scheds([0.5, 0.5], [sched_cos(0.2, 1.), sched_cos(1., 0.1)])
learn,run = get_learn_run(nfs, data, 1., conv_layer, cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.213843359375, tensor(0.6207, device='cuda:0')]
valid: [0.308865869140625, tensor(0.9118, device='cuda:0')]
train: [0.31337412109375, tensor(0.9044, device='cuda:0')]
valid: [0.18531549072265624, tensor(0.9418, device='cuda:0')]
train: [0.514809609375, tensor(0.8428, device='cuda:0')]
valid: [0.3498984619140625, tensor(0.8917, device='cuda:0')]
train: [0.534424921875, tensor(0.8684, device='cuda:0')]
valid: [2.154244140625, tensor(0.2246, device='cuda:0')]
train: [1.1689215625, tensor(0.6173, device='cuda:0')]
valid: [0.2096001708984375, tensor(0.9354, device='cuda:0')]
train: [0.18139208984375, tensor(0.9438, device='cuda:0')]
valid: [0.14386746826171876, tensor(0.9588, device='cuda:0')]
train: [0.121735712890625, tensor(0.9626, device='cuda:0')]
valid: [0.11306314697265625, tensor(0.9673, device='cuda:0')]
train: [0.10355359375, tensor(0.9679, device='cuda:0')]
valid: [0.10647432861328125, tensor(0.9694, device='cuda:0')]
###Markdown
Uniform init may provide more useful initial weights (a normal distribution puts a lot of them close to 0).
###Code
learn,run = get_learn_run(nfs, data, 1., conv_layer, uniform=True,
cbs=cbfs+[partial(ParamScheduler,'lr', sched)])
run.fit(8, learn)
###Output
train: [1.041688828125, tensor(0.6662, device='cuda:0')]
valid: [0.446877099609375, tensor(0.8561, device='cuda:0')]
train: [0.3849487890625, tensor(0.8829, device='cuda:0')]
valid: [0.235209033203125, tensor(0.9319, device='cuda:0')]
train: [0.384751484375, tensor(0.8856, device='cuda:0')]
valid: [0.5723826171875, tensor(0.8181, device='cuda:0')]
train: [0.262653671875, tensor(0.9206, device='cuda:0')]
valid: [0.11083226318359375, tensor(0.9657, device='cuda:0')]
train: [0.09154287109375, tensor(0.9716, device='cuda:0')]
valid: [0.08876705322265625, tensor(0.9735, device='cuda:0')]
train: [0.06415849609375, tensor(0.9804, device='cuda:0')]
valid: [0.06935145263671876, tensor(0.9801, device='cuda:0')]
train: [0.0487696728515625, tensor(0.9850, device='cuda:0')]
valid: [0.06632761840820313, tensor(0.9805, device='cuda:0')]
train: [0.04038564697265625, tensor(0.9885, device='cuda:0')]
valid: [0.06507407836914063, tensor(0.9818, device='cuda:0')]
###Markdown
Export Here's a handy way to export our module without needing to update the file name - after we define this, we can just use `nb_auto_export()` in the future (h/t Stas Bekman):
###Code
#export
from IPython.display import display, Javascript
def nb_auto_export():
display(Javascript("""{
const ip = IPython.notebook
if (ip) {
ip.save_notebook()
console.log('a')
const s = `!python notebook2script.py ${ip.notebook_name}`
if (ip.kernel) { ip.kernel.execute(s) }
}
}"""))
nb_auto_export()
###Output
_____no_output_____
###Markdown
ConvNet
###Code
x_train,y_train,x_valid,y_valid = get_data()
#export
def normalize_to(train, valid):
m,s = train.mean(),train.std()
return normalize(train, m, s), normalize(valid, m, s)
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
x_train.mean(),x_train.std()
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
#export
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x): return self.func(x)
def flatten(x): return x.view(x.shape[0], -1)
def mnist_resize(x): return x.view(-1, 1, 28, 28)
def get_cnn_model(data):
return nn.Sequential(
Lambda(mnist_resize),
nn.Conv2d( 1, 8, 5, padding=2,stride=2), nn.ReLU(), #14
nn.Conv2d( 8,16, 3, padding=1,stride=2), nn.ReLU(), # 7
nn.Conv2d(16,32, 3, padding=1,stride=2), nn.ReLU(), # 4
nn.Conv2d(32,32, 3, padding=1,stride=2), nn.ReLU(), # 2
nn.AdaptiveAvgPool2d(1),
Lambda(flatten),
nn.Linear(32,data.c)
)
model = get_cnn_model(data)
cbfs = [Recorder, partial(AvgStatsCallback,accuracy)]
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(1, learn)
###Output
train: [1.80052609375, tensor(0.3767)]
valid: [0.56441240234375, tensor(0.8122)]
CPU times: user 9.75 s, sys: 6.02 s, total: 15.8 s
Wall time: 5.29 s
###Markdown
CUDA
###Code
#export
class CudaCallback(Callback):
def begin_fit(self): self.model.cuda()
def begin_batch(self): self.run.xb,self.run.yb = self.xb.cuda(),self.yb.cuda()
cbfs.append(CudaCallback)
model = get_cnn_model(data)
opt = optim.SGD(model.parameters(), lr=0.4)
learn = Learner(model, opt, loss_func, data)
run = Runner(cb_funcs=cbfs)
%time run.fit(3, learn)
###Output
train: [1.70935421875, tensor(0.4247, device='cuda:0')]
valid: [0.545311962890625, tensor(0.8320, device='cuda:0')]
train: [0.33943984375, tensor(0.8975, device='cuda:0')]
valid: [0.2418747314453125, tensor(0.9261, device='cuda:0')]
train: [0.186439921875, tensor(0.9447, device='cuda:0')]
valid: [0.12864632568359374, tensor(0.9615, device='cuda:0')]
CPU times: user 4.32 s, sys: 1.25 s, total: 5.57 s
Wall time: 5.49 s
###Markdown
Refactor model
###Code
def conv2d(ni, nf, ks=3, stride=2):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), nn.ReLU())
#export
class BatchTransformXCallback(Callback):
_order=2
def __init__(self, tfm): self.tfm = tfm
def begin_batch(self): self.run.xb = self.tfm(self.xb)
def view_tfm(*size):
def _inner(x): return x.view(*((-1,)+size))
return _inner
mnist_view = view_tfm(1,28,28)
cbfs.append(partial(BatchTransformXCallback, mnist_view))
###Output
_____no_output_____
###Markdown
This model can now work on any size input:
###Code
nfs = [8,16,32,32]
def get_cnn_layers(data, nfs):
nfs = [1] + nfs
return [
conv2d(nfs[i], nfs[i+1], 5 if i==0 else 3)
for i in range(len(nfs)-1)
] + [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs): return nn.Sequential(*get_cnn_layers(data, nfs))
#export
def get_runner(model, data, lr=0.6, cbs=None, loss_func = F.cross_entropy):
opt = optim.SGD(model.parameters(), lr=lr)
learn = Learner(model, opt, loss_func, data)
return learn, Runner(cb_funcs=cbfs + listify(cbs))
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.4)
run.fit(3, learn)
###Output
train: [2.02259859375, tensor(0.3017, device='cuda:0')]
valid: [1.07844677734375, tensor(0.6620, device='cuda:0')]
train: [0.4846076171875, tensor(0.8499, device='cuda:0')]
valid: [0.2560584716796875, tensor(0.9213, device='cuda:0')]
train: [0.22737287109375, tensor(0.9305, device='cuda:0')]
valid: [0.14688985595703125, tensor(0.9572, device='cuda:0')]
###Markdown
Hooks Manual insertion
###Code
class SequentialModel(nn.Module):
def __init__(self, *layers):
super().__init__()
self.layers = nn.ModuleList(layers)
self.act_means = [[] for _ in layers]
self.act_stds = [[] for _ in layers]
def __call__(self, x):
for i,l in enumerate(self.layers):
x = l(x)
self.act_means[i].append(x.data.mean())
self.act_stds [i].append(x.data.std ())
return x
def __iter__(self): return iter(self.layers)
model = SequentialModel(*get_cnn_layers(data, nfs))
learn,run = get_runner(model, data, lr=0.9)
run.fit(2, learn)
for l in model.act_means: plt.plot(l)
plt.legend(range(5));
for l in model.act_stds: plt.plot(l)
plt.legend(range(5));
for l in model.act_means: plt.plot(l[:10])
plt.legend(range(5));
for l in model.act_stds: plt.plot(l[:10])
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Pytorch hooks
###Code
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5)
act_means = [[] for _ in model]
act_stds = [[] for _ in model]
def append_stats(i, mod, inp, outp):
act_means[i].append(outp.data.mean())
act_stds [i].append(outp.data.std())
for i,m in enumerate(model): m.register_forward_hook(partial(append_stats, i))
run.fit(1, learn)
for o in act_means: plt.plot(o)
plt.legend(range(5));
###Output
_____no_output_____
###Markdown
Hook class
###Code
#export
def children(m): return list(m.children())
class Hook():
def __init__(self, m, f): self.hook = m.register_forward_hook(partial(f, self))
def remove(self): self.hook.remove()
def __del__(self): self.remove()
def append_stats(hook, mod, inp, outp):
if not hasattr(hook,'stats'): hook.stats = ([],[])
means,stds = hook.stats
means.append(outp.data.mean())
stds .append(outp.data.std())
model = get_cnn_model(data, nfs)
learn,run = get_runner(model, data, lr=0.5)
hooks = [Hook(l, append_stats) for l in children(model[:4])]
run.fit(1, learn)
for h in hooks:
plt.plot(h.stats[0])
h.remove()
plt.legend(range(4));
###Output
_____no_output_____
###Markdown
A Hooks class
###Code
#export
class ListContainer():
def __init__(self, items): self.items = items
def __getitem__(self,i): return self.items[i]
def __len__(self): return len(self.items)
def __iter__(self): return iter(self.items)
def __setitem__(self, i, o): self.items[i] = o
def __delitem__(self, i): del(self.items[i])
def __repr__(self): return f"{self.__class__.__name__} ({len(self)} items)"
#export
from torch.nn import init
class Hooks(ListContainer):
def __init__(self, ms, f): super().__init__([Hook(m, f) for m in ms])
def __delitem__(self, i):
self[i].remove()
super().__delitem__(i)
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
def remove(self):
for h in self: h.remove()
model = get_cnn_model(data, nfs).cuda()
learn,run = get_runner(model, data, lr=0.9)
hooks = Hooks(model, append_stats)
hooks
hooks.remove()
x,y = next(iter(data.train_dl))
x = mnist_resize(x).cuda()
x.mean(),x.std()
p = model[0](x)
p.mean(),p.std()
for l in model:
if isinstance(l, nn.Sequential): init.kaiming_normal_(l[0].weight)
p = model[0](x)
p.mean(),p.std()
with Hooks(model, append_stats) as hooks:
run.fit(2, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(5));
###Output
train: [1.6772765625, tensor(0.4324, device='cuda:0')]
valid: [0.521271728515625, tensor(0.8343, device='cuda:0')]
train: [0.3992796875, tensor(0.8749, device='cuda:0')]
valid: [0.239741650390625, tensor(0.9252, device='cuda:0')]
###Markdown
Other statistics
- pct < x
- percentiles

Generalized ReLU
###Code
#export
def get_cnn_layers(data, nfs, **kwargs):
nfs = [1] + nfs
return [conv2d(nfs[i], nfs[i+1], 5 if i==0 else 3, **kwargs)
for i in range(len(nfs)-1)] + [
nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c)]
def get_cnn_model(data, nfs, **kwargs): return nn.Sequential(*get_cnn_layers(data, nfs, **kwargs))
def conv2d(ni, nf, ks=3, stride=2, **kwargs):
return nn.Sequential(
nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride), GeneralRelu(**kwargs))
class GeneralRelu(nn.Module):
def __init__(self, leak=None, sub=None, maxv=None):
super().__init__()
self.leak,self.sub,self.maxv = leak,sub,maxv
def forward(self, x):
x = F.leaky_relu(x,self.leak) if self.leak is not None else F.relu(x)
if self.sub is not None: x.sub_(self.sub)
if self.maxv is not None: x.clamp_max_(self.maxv)
return x
model = SequentialModel(*get_cnn_layers(data, nfs, leak=0.1, sub=0.4, maxv=6.))
for l in model:
if isinstance(l, nn.Sequential): init.kaiming_normal_(l[0].weight, a=0.1)
learn,run = get_runner(model, data, lr=0.9)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
with Hooks(model, append_stats) as hooks:
run.fit(1, learn)
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
ax0.legend(range(6));
#export
def init_cnn(m):
for l in m:
if isinstance(l, nn.Sequential):
init.kaiming_normal_(l[0].weight, a=0.1)
l[0].bias.data.zero_()
model = nn.Sequential(*get_cnn_layers(data, nfs, leak=0.1, sub=0.4, maxv=6.))
init_cnn(model)
learn,run = get_runner(model, data, lr=0.9)
with Hooks(model, append_stats) as hooks:
run.fit(2, learn)
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms[:10])
ax1.plot(ss[:10])
h.remove()
plt.legend(range(5));
fig,(ax0,ax1) = plt.subplots(1,2, figsize=(10,4))
for h in hooks:
ms,ss = h.stats
ax0.plot(ms)
ax1.plot(ss)
plt.legend(range(5));
#export
def get_learn_run(nfs, data, lr, cbs=None):
model = nn.Sequential(*get_cnn_layers(data, nfs, leak=0.1, sub=0.4, maxv=6.))
init_cnn(model)
return get_runner(model, data, lr=lr, cbs=cbs)
sched = combine_scheds([0.5, 0.5], [sched_cos(0.2, 1.), sched_cos(1., 0.1)])
learn,run = get_learn_run(nfs, data, 1., cbs=partial(ParamScheduler,'lr', sched))
run.fit(8, learn)
###Output
train: [0.79113390625, tensor(0.7699, device='cuda:0')]
valid: [0.3256356201171875, tensor(0.8944, device='cuda:0')]
train: [0.26623271484375, tensor(0.9196, device='cuda:0')]
valid: [0.259093212890625, tensor(0.9182, device='cuda:0')]
train: [0.18276099609375, tensor(0.9434, device='cuda:0')]
valid: [0.12496890869140626, tensor(0.9649, device='cuda:0')]
train: [0.12673763671875, tensor(0.9612, device='cuda:0')]
valid: [0.1013431396484375, tensor(0.9686, device='cuda:0')]
train: [0.0822570703125, tensor(0.9746, device='cuda:0')]
valid: [0.077582763671875, tensor(0.9791, device='cuda:0')]
train: [0.0612361328125, tensor(0.9813, device='cuda:0')]
valid: [0.06586860961914062, tensor(0.9809, device='cuda:0')]
train: [0.048166259765625, tensor(0.9855, device='cuda:0')]
valid: [0.0648989990234375, tensor(0.9819, device='cuda:0')]
train: [0.041812529296875, tensor(0.9882, device='cuda:0')]
valid: [0.06230037841796875, tensor(0.9827, device='cuda:0')]
###Markdown
Export
###Code
!python notebook2script.py 06_cuda_cnn_hooks_init.ipynb
###Output
Converted 06_cuda_cnn_hooks_init.ipynb to nb_06.py
|
colab/Image_Inpainting_with_GMCNN_model.ipynb | ###Markdown
Download the current version of GMCNN pipeline from GitHub
###Code
!git clone https://github.com/tlatkowski/inpainting-gmcnn-keras.git
!ls
###Output
_____no_output_____
###Markdown
Download and extract NVIDIA's testing mask dataset
###Code
!wget http://masc.cs.gmu.edu/wiki/uploads/partialconv/mask.zip
!unzip -q mask.zip
!ls
###Output
_____no_output_____
###Markdown
Download and extract the dataset with training images (Places365)
###Code
!wget http://data.csail.mit.edu/places/places365/val_large.tar
!tar -xf val_large.tar
!mkdir images
!cp -a val_large/ images
!ls
###Output
_____no_output_____
###Markdown
Install all requirements
###Code
!pip install -r inpainting-gmcnn-keras/requirements/requirements.txt
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
# Start TensorBoard in the background, pointing it at the training output directory.
LOG_DIR = './outputs'
get_ipython().system_raw(
    'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
    .format(LOG_DIR)
)
# Tunnel the local TensorBoard port through ngrok and print the public URL to open it.
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!mkdir config
!cp inpainting-gmcnn-keras/config/main_config.ini config
%%writefile config/main_config.ini
[TRAINING]
WGAN_TRAINING_RATIO = 1
NUM_EPOCHS = 5
BATCH_SIZE = 4
IMG_HEIGHT = 256
IMG_WIDTH = 256
NUM_CHANNELS = 3
LEARNING_RATE = 0.0001
SAVE_MODEL_STEPS_PERIOD = 1000
[MODEL]
ADD_MASK_AS_GENERATOR_INPUT = True
GRADIENT_PENALTY_LOSS_WEIGHT = 10
ID_MRF_LOSS_WEIGHT = 0.05
ADVERSARIAL_LOSS_WEIGHT = 0.001
NN_STRETCH_SIGMA = 0.5
VGG_16_LAYERS = 3,6,10
ID_MRF_STYLE_WEIGHT = 1.0
ID_MRF_CONTENT_WEIGHT = 1.0
NUM_GAUSSIAN_STEPS = 3
GAUSSIAN_KERNEL_SIZE = 32
GAUSSIAN_KERNEL_STD = 40.0
!ls
###Output
_____no_output_____
###Markdown
Train generator with only confidence reconstruction loss for 5 epochs
###Code
!python inpainting-gmcnn-keras/runner.py --train_path images --mask_path mask --experiment_name "gmcnn256x256" -warm_up_generator
###Output
_____no_output_____
###Markdown
Visualize predicted images for specific training steps in warm-up generator mode
###Code
!ls outputs/gmcnn256x256/predicted_pics/warm_up_generator/
from IPython.display import Image
Image('outputs/gmcnn256x256/predicted_pics/warm_up_generator/step_3000.png')
Image('outputs/gmcnn256x256/predicted_pics/warm_up_generator/step_5000.png')
###Output
_____no_output_____
###Markdown
Full Wasserstein GAN training mode: generator, local and global discriminators
###Code
%%writefile config/main_config.ini
[TRAINING]
WGAN_TRAINING_RATIO = 5
NUM_EPOCHS = 5
BATCH_SIZE = 4
IMG_HEIGHT = 256
IMG_WIDTH = 256
NUM_CHANNELS = 3
LEARNING_RATE = 0.0002
SAVE_MODEL_STEPS_PERIOD = 500
[MODEL]
ADD_MASK_AS_GENERATOR_INPUT = True
GRADIENT_PENALTY_LOSS_WEIGHT = 10
ID_MRF_LOSS_WEIGHT = 0.05
ADVERSARIAL_LOSS_WEIGHT = 0.0005
NN_STRETCH_SIGMA = 0.5
VGG_16_LAYERS = 3,6,10
ID_MRF_STYLE_WEIGHT = 1.0
ID_MRF_CONTENT_WEIGHT = 1.0
NUM_GAUSSIAN_STEPS = 3
GAUSSIAN_KERNEL_SIZE = 32
GAUSSIAN_KERNEL_STD = 40.0
!python inpainting-gmcnn-keras/runner.py --train_path images --mask_path mask -from_weights --experiment_name "gmcnn256x256"
###Output
_____no_output_____
###Markdown
Visualize results of full model training
###Code
!ls outputs/gmcnn256x256/predicted_pics/wgan/
Image('outputs/gmcnn256x256/predicted_pics/wgan/step_1000.png')
Image('outputs/gmcnn256x256/predicted_pics/wgan/step_2000.png')
###Output
_____no_output_____
###Markdown
Create zip file with model results and checkpoints
###Code
!zip -r outputs.zip outputs
ls
!rm -rf outputs/
!python inpainting-gmcnn-keras/runner.py --train_path images --mask_path mask -warm_up_generator -from_weights
###Output
_____no_output_____
###Markdown
Download the current version of GMCNN pipeline from GitHub
###Code
!git clone https://github.com/rohangupta16feb/Image-Inpainting-using-GAN-Keras.git
!ls
###Output
_____no_output_____
###Markdown
Download and extract NVIDIA's testing mask dataset
###Code
!wget http://masc.cs.gmu.edu/wiki/uploads/partialconv/mask.zip
!unzip -q mask.zip
!ls
###Output
_____no_output_____
###Markdown
Download and extract the dataset with training images (Places365)
###Code
!wget http://data.csail.mit.edu/places/places365/val_large.tar
!tar -xf val_large.tar
!mkdir images
!cp -a val_large/ images
!ls
###Output
_____no_output_____
###Markdown
Install all requirements
###Code
!pip install -r Image-Inpainting-using-GAN-Keras/requirements/requirements.txt
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
LOG_DIR = './outputs'
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!mkdir config
!cp Image-Inpainting-using-GAN-Keras/config/main_config.ini config
%%writefile config/main_config.ini
[TRAINING]
WGAN_TRAINING_RATIO = 1
NUM_EPOCHS = 5
BATCH_SIZE = 4
IMG_HEIGHT = 256
IMG_WIDTH = 256
NUM_CHANNELS = 3
LEARNING_RATE = 0.0001
SAVE_MODEL_STEPS_PERIOD = 1000
[MODEL]
ADD_MASK_AS_GENERATOR_INPUT = True
GRADIENT_PENALTY_LOSS_WEIGHT = 10
ID_MRF_LOSS_WEIGHT = 0.05
ADVERSARIAL_LOSS_WEIGHT = 0.001
NN_STRETCH_SIGMA = 0.5
VGG_16_LAYERS = 3,6,10
ID_MRF_STYLE_WEIGHT = 1.0
ID_MRF_CONTENT_WEIGHT = 1.0
NUM_GAUSSIAN_STEPS = 3
GAUSSIAN_KERNEL_SIZE = 32
GAUSSIAN_KERNEL_STD = 40.0
!ls
###Output
_____no_output_____
###Markdown
Train generator with only confidence reconstruction loss for 5 epochs
###Code
!python Image-Inpainting-using-GAN-Keras/runner.py --train_path images --mask_path mask --experiment_name "gmcnn256x256" -warm_up_generator
###Output
_____no_output_____
###Markdown
Visualize predicted images for specific training steps in warm-up generator mode
###Code
!ls outputs/gmcnn256x256/predicted_pics/warm_up_generator/
from IPython.display import Image
Image('outputs/gmcnn256x256/predicted_pics/warm_up_generator/step_3000.png')
Image('outputs/gmcnn256x256/predicted_pics/warm_up_generator/step_5000.png')
###Output
_____no_output_____
###Markdown
Full Wasserstein GAN training mode: generator, local and global discriminators
###Code
%%writefile config/main_config.ini
[TRAINING]
WGAN_TRAINING_RATIO = 5
NUM_EPOCHS = 5
BATCH_SIZE = 4
IMG_HEIGHT = 256
IMG_WIDTH = 256
NUM_CHANNELS = 3
LEARNING_RATE = 0.0002
SAVE_MODEL_STEPS_PERIOD = 500
[MODEL]
ADD_MASK_AS_GENERATOR_INPUT = True
GRADIENT_PENALTY_LOSS_WEIGHT = 10
ID_MRF_LOSS_WEIGHT = 0.05
ADVERSARIAL_LOSS_WEIGHT = 0.0005
NN_STRETCH_SIGMA = 0.5
VGG_16_LAYERS = 3,6,10
ID_MRF_STYLE_WEIGHT = 1.0
ID_MRF_CONTENT_WEIGHT = 1.0
NUM_GAUSSIAN_STEPS = 3
GAUSSIAN_KERNEL_SIZE = 32
GAUSSIAN_KERNEL_STD = 40.0
!python Image-Inpainting-using-GAN-Keras/runner.py --train_path images --mask_path mask -from_weights --experiment_name "gmcnn256x256"
###Output
_____no_output_____
###Markdown
Visualize results of full model training
###Code
!ls outputs/gmcnn256x256/predicted_pics/wgan/
Image('outputs/gmcnn256x256/predicted_pics/wgan/step_1000.png')
Image('outputs/gmcnn256x256/predicted_pics/wgan/step_2000.png')
###Output
_____no_output_____
###Markdown
Create zip file with model results and checkpoints
###Code
!zip -r outputs.zip outputs
ls
!rm -rf outputs/
!python Image-Inpainting-using-GAN-Keras/runner.py --train_path images --mask_path mask -warm_up_generator -from_weights
###Output
_____no_output_____
###Markdown
Download the current version of GMCNN pipeline from GitHub
###Code
!git clone https://github.com/tlatkowski/inpainting-gmcnn-keras.git
!ls
###Output
Cloning into 'inpainting-gmcnn-keras'...
remote: Enumerating objects: 251, done.
remote: Counting objects: 100% (251/251), done.
remote: Compressing objects: 100% (168/168), done.
remote: Total 251 (delta 126), reused 197 (delta 76), pack-reused 0
Receiving objects: 100% (251/251), 4.48 MiB | 26.38 MiB/s, done.
Resolving deltas: 100% (126/126), done.
inpainting-gmcnn-keras sample_data
###Markdown
Download and extract NVIDIA's testing mask dataset
###Code
!wget http://masc.cs.gmu.edu/wiki/uploads/partialconv/mask.zip
!unzip -q mask.zip
!ls
###Output
inpainting-gmcnn-keras mask mask.zip sample_data
###Markdown
Download and extract the dataset with training images (Places365)
###Code
!wget http://data.csail.mit.edu/places/places365/val_large.tar
!tar -xf val_large.tar
!mkdir images
!cp -a val_large/ images
!ls
###Output
inpainting-gmcnn-keras mask mask.zip sample_data val_large val_large.tar
###Markdown
Install all requirements
###Code
!pip install -r inpainting-gmcnn-keras/requirements/requirements.txt
!echo $PYTHONPATH
import os, sys
# Plausible reconstruction of this cell (the source here is mangled): set PYTHONPATH to the
# value given in the source; the original sys.path.insert argument could not be recovered.
os.environ['PYTHONPATH'] = "/usr/bin/python3"
# sys.path.insert(0, 'inpainting-gmcnn-keras')  # hypothetical path; uncomment if imports fail
!mkdir config
!cp inpainting-gmcnn-keras/config/main_config.ini config
%%writefile config/main_config.ini
[TRAINING]
WGAN_TRAINING_RATIO = 5
NUM_EPOCHS = 5
BATCH_SIZE = 1
IMG_HEIGHT = 256
IMG_WIDTH = 256
NUM_CHANNELS = 3
LEARNING_RATE = 0.0001
SAVE_MODEL_STEPS_PERIOD = 1000
[MODEL]
GRADIENT_PENALTY_LOSS_WEIGHT = 10
ID_MRF_LOSS_WEIGHT = 0.05
ADVERSARIAL_LOSS_WEIGHT = 0.001
NN_STRETCH_SIGMA = 0.5
VGG_16_LAYERS = 3,6,10
ID_MRF_STYLE_WEIGHT = 1.0
ID_MRF_CONTENT_WEIGHT = 1.0
NUM_GAUSSIAN_STEPS = 3
GAUSSIAN_KERNEL_SIZE = 32
GAUSSIAN_KERNEL_STD = 40.0
###Output
Overwriting config/main_config.ini
###Markdown
Train generator with only confidence reconstruction loss for 5 epochs
###Code
!python inpainting-gmcnn-keras/runner.py --train_path images --mask_path mask -warm_up_generator
###Output
Using TensorFlow backend.
INFO:tensorflow:Setting visible GPU to 0
INFO:tensorflow:Performing generator training only with the reconstruction loss.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
multiply_1 (Multiply) (None, 256, 256, 3) 0 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 64) 4864 multiply_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 128, 128, 64) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 64, 128) 204928 leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 64, 64, 128) 0 conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 32, 32, 256) 819456 leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 32, 32, 256) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 16, 16, 512) 3277312 leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 16, 16, 512) 0 conv2d_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 8, 8, 256) 3277056 leaky_re_lu_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 8, 8, 256) 0 conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 4, 4, 128) 819328 leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 4, 4, 128) 0 conv2d_6[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (None, 2048) 0 leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1) 2049 flatten_1[0][0]
==================================================================================================
Total params: 8,404,993
Trainable params: 8,404,993
Non-trainable params: 0
__________________________________________________________________________________________________
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) (None, 256, 256, 3) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 128, 128, 64) 4864
_________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 128, 128, 64) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 64, 64, 128) 204928
_________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 64, 64, 128) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 32, 32, 256) 819456
_________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 32, 32, 256) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 16, 16, 512) 3277312
_________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 16, 16, 512) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 8, 8, 256) 3277056
_________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 8, 8, 256) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 4, 4, 128) 819328
_________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 4, 4, 128) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 2048) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 2049
=================================================================
Total params: 8,404,993
Trainable params: 8,404,993
Non-trainable params: 0
_________________________________________________________________
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
input_4 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
binary_negation_1 (BinaryNegati (None, 256, 256, 3) 0 input_5[0][0]
__________________________________________________________________________________________________
multiply_2 (Multiply) (None, 256, 256, 3) 0 input_4[0][0]
binary_negation_1[0][0]
__________________________________________________________________________________________________
conv2d_39 (Conv2D) (None, 256, 256, 32) 896 multiply_2[0][0]
__________________________________________________________________________________________________
elu_26 (ELU) (None, 256, 256, 32) 0 conv2d_39[0][0]
__________________________________________________________________________________________________
conv2d_40 (Conv2D) (None, 128, 128, 64) 18496 elu_26[0][0]
__________________________________________________________________________________________________
elu_27 (ELU) (None, 128, 128, 64) 0 conv2d_40[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D) (None, 256, 256, 32) 2432 multiply_2[0][0]
__________________________________________________________________________________________________
conv2d_41 (Conv2D) (None, 128, 128, 64) 36928 elu_27[0][0]
__________________________________________________________________________________________________
elu_12 (ELU) (None, 256, 256, 32) 0 conv2d_25[0][0]
__________________________________________________________________________________________________
elu_28 (ELU) (None, 128, 128, 64) 0 conv2d_41[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D) (None, 128, 128, 64) 51264 elu_12[0][0]
__________________________________________________________________________________________________
conv2d_42 (Conv2D) (None, 64, 64, 128) 73856 elu_28[0][0]
__________________________________________________________________________________________________
elu_13 (ELU) (None, 128, 128, 64) 0 conv2d_26[0][0]
__________________________________________________________________________________________________
elu_29 (ELU) (None, 64, 64, 128) 0 conv2d_42[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D) (None, 128, 128, 64) 102464 elu_13[0][0]
__________________________________________________________________________________________________
conv2d_43 (Conv2D) (None, 64, 64, 128) 147584 elu_29[0][0]
__________________________________________________________________________________________________
elu_14 (ELU) (None, 128, 128, 64) 0 conv2d_27[0][0]
__________________________________________________________________________________________________
elu_30 (ELU) (None, 64, 64, 128) 0 conv2d_43[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 256, 256, 32) 4736 multiply_2[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D) (None, 64, 64, 128) 204928 elu_14[0][0]
__________________________________________________________________________________________________
conv2d_44 (Conv2D) (None, 64, 64, 128) 147584 elu_30[0][0]
__________________________________________________________________________________________________
elu_1 (ELU) (None, 256, 256, 32) 0 conv2d_13[0][0]
__________________________________________________________________________________________________
elu_15 (ELU) (None, 64, 64, 128) 0 conv2d_28[0][0]
__________________________________________________________________________________________________
elu_31 (ELU) (None, 64, 64, 128) 0 conv2d_44[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 128, 128, 64) 100416 elu_1[0][0]
__________________________________________________________________________________________________
conv2d_29 (Conv2D) (None, 64, 64, 128) 409728 elu_15[0][0]
__________________________________________________________________________________________________
conv2d_45 (Conv2D) (None, 64, 64, 128) 147584 elu_31[0][0]
__________________________________________________________________________________________________
elu_2 (ELU) (None, 128, 128, 64) 0 conv2d_14[0][0]
__________________________________________________________________________________________________
elu_16 (ELU) (None, 64, 64, 128) 0 conv2d_29[0][0]
__________________________________________________________________________________________________
elu_32 (ELU) (None, 64, 64, 128) 0 conv2d_45[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 128, 128, 64) 200768 elu_2[0][0]
__________________________________________________________________________________________________
conv2d_30 (Conv2D) (None, 64, 64, 128) 409728 elu_16[0][0]
__________________________________________________________________________________________________
conv2d_46 (Conv2D) (None, 64, 64, 128) 147584 elu_32[0][0]
__________________________________________________________________________________________________
elu_3 (ELU) (None, 128, 128, 64) 0 conv2d_15[0][0]
__________________________________________________________________________________________________
elu_17 (ELU) (None, 64, 64, 128) 0 conv2d_30[0][0]
__________________________________________________________________________________________________
elu_33 (ELU) (None, 64, 64, 128) 0 conv2d_46[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 64, 64, 128) 401536 elu_3[0][0]
__________________________________________________________________________________________________
conv2d_31 (Conv2D) (None, 64, 64, 128) 409728 elu_17[0][0]
__________________________________________________________________________________________________
conv2d_47 (Conv2D) (None, 64, 64, 128) 147584 elu_33[0][0]
__________________________________________________________________________________________________
elu_4 (ELU) (None, 64, 64, 128) 0 conv2d_16[0][0]
__________________________________________________________________________________________________
elu_18 (ELU) (None, 64, 64, 128) 0 conv2d_31[0][0]
__________________________________________________________________________________________________
elu_34 (ELU) (None, 64, 64, 128) 0 conv2d_47[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 64, 64, 128) 802944 elu_4[0][0]
__________________________________________________________________________________________________
conv2d_32 (Conv2D) (None, 64, 64, 128) 409728 elu_18[0][0]
__________________________________________________________________________________________________
conv2d_48 (Conv2D) (None, 64, 64, 128) 147584 elu_34[0][0]
__________________________________________________________________________________________________
elu_5 (ELU) (None, 64, 64, 128) 0 conv2d_17[0][0]
__________________________________________________________________________________________________
elu_19 (ELU) (None, 64, 64, 128) 0 conv2d_32[0][0]
__________________________________________________________________________________________________
elu_35 (ELU) (None, 64, 64, 128) 0 conv2d_48[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 64, 64, 128) 802944 elu_5[0][0]
__________________________________________________________________________________________________
conv2d_33 (Conv2D) (None, 64, 64, 128) 409728 elu_19[0][0]
__________________________________________________________________________________________________
conv2d_49 (Conv2D) (None, 64, 64, 128) 147584 elu_35[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 64, 64, 128) 802944 conv2d_18[0][0]
__________________________________________________________________________________________________
elu_20 (ELU) (None, 64, 64, 128) 0 conv2d_33[0][0]
__________________________________________________________________________________________________
elu_36 (ELU) (None, 64, 64, 128) 0 conv2d_49[0][0]
__________________________________________________________________________________________________
elu_6 (ELU) (None, 64, 64, 128) 0 conv2d_19[0][0]
__________________________________________________________________________________________________
conv2d_34 (Conv2D) (None, 64, 64, 128) 409728 elu_20[0][0]
__________________________________________________________________________________________________
conv2d_50 (Conv2D) (None, 64, 64, 128) 147584 elu_36[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 64, 64, 128) 802944 elu_6[0][0]
__________________________________________________________________________________________________
elu_21 (ELU) (None, 64, 64, 128) 0 conv2d_34[0][0]
__________________________________________________________________________________________________
elu_37 (ELU) (None, 64, 64, 128) 0 conv2d_50[0][0]
__________________________________________________________________________________________________
elu_7 (ELU) (None, 64, 64, 128) 0 conv2d_20[0][0]
__________________________________________________________________________________________________
conv2d_35 (Conv2D) (None, 64, 64, 128) 409728 elu_21[0][0]
__________________________________________________________________________________________________
up_sampling2d_4 (UpSampling2D) (None, 128, 128, 128 0 elu_37[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 64, 64, 128) 802944 elu_7[0][0]
__________________________________________________________________________________________________
elu_22 (ELU) (None, 64, 64, 128) 0 conv2d_35[0][0]
__________________________________________________________________________________________________
conv2d_51 (Conv2D) (None, 128, 128, 64) 204864 up_sampling2d_4[0][0]
__________________________________________________________________________________________________
elu_8 (ELU) (None, 64, 64, 128) 0 conv2d_21[0][0]
__________________________________________________________________________________________________
conv2d_36 (Conv2D) (None, 64, 64, 128) 409728 elu_22[0][0]
__________________________________________________________________________________________________
elu_38 (ELU) (None, 128, 128, 64) 0 conv2d_51[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 64, 64, 128) 802944 elu_8[0][0]
__________________________________________________________________________________________________
elu_23 (ELU) (None, 64, 64, 128) 0 conv2d_36[0][0]
__________________________________________________________________________________________________
conv2d_52 (Conv2D) (None, 128, 128, 64) 102464 elu_38[0][0]
__________________________________________________________________________________________________
elu_9 (ELU) (None, 64, 64, 128) 0 conv2d_22[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D) (None, 128, 128, 128 0 elu_23[0][0]
__________________________________________________________________________________________________
elu_39 (ELU) (None, 128, 128, 64) 0 conv2d_52[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 64, 64, 128) 802944 elu_9[0][0]
__________________________________________________________________________________________________
conv2d_37 (Conv2D) (None, 128, 128, 64) 204864 up_sampling2d_2[0][0]
__________________________________________________________________________________________________
up_sampling2d_5 (UpSampling2D) (None, 256, 256, 64) 0 elu_39[0][0]
__________________________________________________________________________________________________
elu_10 (ELU) (None, 64, 64, 128) 0 conv2d_23[0][0]
__________________________________________________________________________________________________
elu_24 (ELU) (None, 128, 128, 64) 0 conv2d_37[0][0]
__________________________________________________________________________________________________
conv2d_53 (Conv2D) (None, 256, 256, 64) 36928 up_sampling2d_5[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, 64, 64, 128) 802944 elu_10[0][0]
__________________________________________________________________________________________________
conv2d_38 (Conv2D) (None, 128, 128, 64) 102464 elu_24[0][0]
__________________________________________________________________________________________________
elu_40 (ELU) (None, 256, 256, 64) 0 conv2d_53[0][0]
__________________________________________________________________________________________________
elu_11 (ELU) (None, 64, 64, 128) 0 conv2d_24[0][0]
__________________________________________________________________________________________________
elu_25 (ELU) (None, 128, 128, 64) 0 conv2d_38[0][0]
__________________________________________________________________________________________________
conv2d_54 (Conv2D) (None, 256, 256, 64) 36928 elu_40[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D) (None, 256, 256, 128 0 elu_11[0][0]
__________________________________________________________________________________________________
up_sampling2d_3 (UpSampling2D) (None, 256, 256, 64) 0 elu_25[0][0]
__________________________________________________________________________________________________
elu_41 (ELU) (None, 256, 256, 64) 0 conv2d_54[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 256, 256, 256 0 up_sampling2d_1[0][0]
up_sampling2d_3[0][0]
elu_41[0][0]
__________________________________________________________________________________________________
conv2d_55 (Conv2D) (None, 256, 256, 16) 36880 concatenate_1[0][0]
__________________________________________________________________________________________________
elu_42 (ELU) (None, 256, 256, 16) 0 conv2d_55[0][0]
__________________________________________________________________________________________________
conv2d_56 (Conv2D) (None, 256, 256, 3) 435 elu_42[0][0]
__________________________________________________________________________________________________
elu_43 (ELU) (None, 256, 256, 3) 0 conv2d_56[0][0]
__________________________________________________________________________________________________
clip_1 (Clip) (None, 256, 256, 3) 0 elu_43[0][0]
==================================================================================================
Total params: 12,806,595
Trainable params: 12,806,595
Non-trainable params: 0
__________________________________________________________________________________________________
WARNING:tensorflow:From /content/inpainting-gmcnn-keras/utils/gaussian_utils.py:6: Normal.__init__ (from tensorflow.python.ops.distributions.normal) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/distributions/normal.py:160: Distribution.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 1s 0us/step
2019-03-24 06:03:54.061782: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz
2019-03-24 06:03:54.062244: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x91f6580 executing computations on platform Host. Devices:
2019-03-24 06:03:54.062286: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-03-24 06:03:54.272229: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-03-24 06:03:54.272796: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x91f69a0 executing computations on platform CUDA. Devices:
2019-03-24 06:03:54.272837: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Tesla K80, Compute Capability 3.7
2019-03-24 06:03:54.273259: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2019-03-24 06:03:54.273296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-03-24 06:03:55.791555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-24 06:03:55.791637: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-03-24 06:03:55.791666: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-03-24 06:03:55.792006: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2019-03-24 06:03:55.792130: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10754 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
INFO:tensorflow: #### Skipping random pooling ...
INFO:tensorflow: #### Skipping random pooling ...
INFO:tensorflow: #### pooling 65x65 out of 128x128
Found 36500 images belonging to 1 classes.
Found 36500 images belonging to 1 classes.
Found 12000 images belonging to 1 classes.
Epochs: 0% 0/5 [00:00<?, ?it/s]/usr/local/lib/python3.6/dist-packages/keras/engine/training.py:490: UserWarning: Discrepancy between trainable weights and collected trainable weights, did you set `model.trainable` without calling `model.compile` after ?
'Discrepancy between trainable weights and collected trainable'
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
2019-03-24 06:04:12.870387: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
Epochs: 0% 0/5 [00:20<?, ?it/s, epoch=0, generator_loss=0.79, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=1|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 0% 0/5 [13:46<?, ?it/s, epoch=0, generator_loss=0.22, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=1001|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 0% 0/5 [27:06<?, ?it/s, epoch=0, generator_loss=0.21, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=2001|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 0% 0/5 [40:25<?, ?it/s, epoch=0, generator_loss=0.13, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=3001|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 0% 0/5 [53:43<?, ?it/s, epoch=0, generator_loss=0.13, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=4001|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 0% 0/5 [1:07:00<?, ?it/s, epoch=0, generator_loss=0.17, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=5001|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 0% 0/5 [1:20:17<?, ?it/s, epoch=0, generator_loss=0.14, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=6001|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 0% 0/5 [1:33:33<?, ?it/s, epoch=0, generator_loss=0.11, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=7001|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 20% 1/5 [1:46:52<6:30:12, 5853.11s/it, epoch=1, generator_loss=0.12, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=701|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 20% 1/5 [2:00:10<6:30:12, 5853.11s/it, epoch=1, generator_loss=0.13, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=1701|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 20% 1/5 [2:13:28<6:30:12, 5853.11s/it, epoch=1, generator_loss=0.09, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=2701|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 20% 1/5 [2:26:48<6:30:12, 5853.11s/it, epoch=1, generator_loss=0.10, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=3701|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 20% 1/5 [2:40:10<6:30:12, 5853.11s/it, epoch=1, generator_loss=0.09, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=4701|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 20% 1/5 [2:53:37<6:30:12, 5853.11s/it, epoch=1, generator_loss=0.11, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=5701|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 20% 1/5 [3:07:05<6:30:12, 5853.11s/it, epoch=1, generator_loss=0.06, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=6701|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 40% 2/5 [3:20:25<4:52:40, 5853.58s/it, epoch=2, generator_loss=0.08, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=401|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 40% 2/5 [3:33:40<4:52:40, 5853.58s/it, epoch=2, generator_loss=0.08, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=1401|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 40% 2/5 [3:46:55<4:52:40, 5853.58s/it, epoch=2, generator_loss=0.07, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=2401|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 40% 2/5 [4:00:09<4:52:40, 5853.58s/it, epoch=2, generator_loss=0.15, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=3401|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 40% 2/5 [4:13:24<4:52:40, 5853.58s/it, epoch=2, generator_loss=0.08, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=4401|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 40% 2/5 [4:26:36<4:52:40, 5853.58s/it, epoch=2, generator_loss=0.10, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=5401|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 40% 2/5 [4:39:46<4:52:40, 5853.58s/it, epoch=2, generator_loss=0.08, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=6401|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [4:52:57<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.06, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [5:06:06<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.14, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=1101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [5:19:14<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.04, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=2101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [5:32:28<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.14, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=3101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [5:45:41<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.12, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=4101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [5:58:53<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.10, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=5101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [6:12:01<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.06, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=6101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 60% 3/5 [6:25:09<3:14:28, 5834.36s/it, epoch=3, generator_loss=0.13, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=7101|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 80% 4/5 [6:38:16<1:36:54, 5814.82s/it, epoch=4, generator_loss=0.08, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=801|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 80% 4/5 [6:51:23<1:36:54, 5814.82s/it, epoch=4, generator_loss=0.10, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=1801|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 80% 4/5 [7:04:32<1:36:54, 5814.82s/it, epoch=4, generator_loss=0.05, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=2801|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 80% 4/5 [7:17:43<1:36:54, 5814.82s/it, epoch=4, generator_loss=0.11, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=3801|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 80% 4/5 [7:30:55<1:36:54, 5814.82s/it, epoch=4, generator_loss=0.06, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=4801|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 80% 4/5 [7:44:06<1:36:54, 5814.82s/it, epoch=4, generator_loss=0.09, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=5801|7300]INFO:tensorflow:Saved generator weights to: ./outputs/weights/gmcnn.h5
INFO:tensorflow:Saved global critic weights to: ./outputs/weights/global_critic.h5
INFO:tensorflow:Saved local critic weights to: ./outputs/weights/local_critic.h5
Epochs: 80% 4/5 [7:57:01<1:36:54, 5814.82s/it, epoch=4, generator_loss=0.06, global_discriminator_loss=0.00, local_discriminator_loss=0.00, step=6780|7300]
###Markdown
List predicted images for specific training steps and visualize some of them
###Code
!ls outputs/predicted_pics/warm_up_generator/
from IPython.display import Image
Image('outputs/predicted_pics/warm_up_generator/step_2000.png')
Image('outputs/predicted_pics/warm_up_generator/step_32000.png')
###Output
_____no_output_____
###Markdown
Full Wasserstein GAN training mode: generator, local and global discriminators
###Code
!python inpainting-gmcnn-keras/runner.py --train_path images --mask_path mask -from_weights
!zip -r outputs.zip outputs/
ls
from google.colab import files
files.download("outputs.zip")
###Output
_____no_output_____ |
Notebooks/GalKin example notebook.ipynb | ###Markdown
GalKin example notebook configure GalKin
###Code
# light profile
light_profile = 'Hernquist'
kwargs_light = {'r_eff': 0.5} # effective half light radius (2d projected) in arcsec
# mass profile
mass_profile = 'power_law'
kwargs_profile = {'theta_E': 1.2, 'gamma': 2.2} # Einstein radius (arcsec) and power-law slope
# anisotropy profile
anisotropy_type = 'r_ani'
kwargs_anisotropy = {'r_ani': .5} # anisotropy radius [arcsec]
# aperture as shell
aperture_type = 'shell'
kwargs_aperture_inner = {'r_in': 0., 'r_out':0.2, 'center_dec': 0, 'center_ra':0}
kwargs_aperture_outer = {'r_in': 0., 'r_out':1.5, 'center_dec': 0, 'center_ra':0}
# aperture as slit
#aperture_type = 'slit'
#kwargs_aperture = {'length': 3.8, 'width': 0.9, 'center_ra': 0, 'center_dec': 0, 'angle': 0}
psf_fwhm = 0.7 # Gaussian FWHM psf
# redshifts
if False:
z_d = 0.745
z_s = 1.789
from lenstronomy.Cosmo.cosmo_properties import CosmoProp
cosmoProp = CosmoProp(z_lens=z_d, z_source=z_s)
D_d = cosmoProp.dist_OL
D_s = cosmoProp.dist_OS
D_ds = cosmoProp.dist_LS
else:
D_d = 1553.01805628
D_s = 1786.98950495
D_ds = 815.309150626
print(D_d, D_s, D_ds)
kwargs_cosmo = {'D_d': D_d, 'D_s': D_s, 'D_ds': D_ds}
from galkin.galkin import GalKin
galkin = GalKin(aperture=aperture_type, mass_profile=mass_profile, light_profile=light_profile, anisotropy_type=anisotropy_type, psf_fwhm=psf_fwhm, kwargs_cosmo=kwargs_cosmo)
sigma = galkin.vel_disp(kwargs_profile, kwargs_aperture_inner, kwargs_light, kwargs_anisotropy, num=1000)
print(sigma)
sigma = galkin.vel_disp(kwargs_profile, kwargs_aperture_outer, kwargs_light, kwargs_anisotropy, num=1000)
print(sigma)
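# Note (an assumption about this older standalone GalKin API, not stated in the notebook):
# vel_disp is expected to return the seeing-convolved, luminosity-weighted line-of-sight
# velocity dispersion within the given aperture, so the two calls above compare the inner
# (0-0.2 arcsec) and outer (0-1.5 arcsec) shell apertures defined earlier.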
###Output
_____no_output_____ |
src/3 - Quantum Teleportation/Quantum Teleportation.ipynb | ###Markdown
Quantum TeleportationContrary to the name, quantum teleportation doesn't physically teleport a qubit; it transfers the qubit's state (the information). Thanks to entanglement, the state reappears on the receiving qubit without anything travelling through the space in between, regardless of the distance, although the sender's measurement results still have to be communicated classically, so no information moves faster than light. This also means a qubit must already be present at the receiving end, and we can't simply copy the state the way classical bits are copied: that would require measuring the quantum state, destroying the very state we're trying to send.Since qubits start out in the `|0>` state, we'll apply an `x` gate to `q0` to flip it to `|1>`; this is the state we'll teleport.
###Code
import qiskit
from qiskit.tools import visualization
circuit = qiskit.QuantumCircuit(3, 3)
circuit.x(0)
circuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
The teleportation protocol requires a shared entangled pair, so we'll entangle `q1` (the sender's helper qubit) and `q2` (the receiver's qubit) into the Bell state $(|00\rangle + |11\rangle)/\sqrt{2}$ using an `h` on `q1` followed by a `cx` from `q1` to `q2`.
###Code
circuit.barrier()
circuit.h(1)
circuit.cx(1, 2)
circuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
According to the quantum teleportation protocol, we next apply a controlled-NOT gate from `q0` onto `q1`, followed by a Hadamard gate on `q0`.
###Code
circuit.cx(0, 1)
circuit.h(0)
circuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Let's measure `q0` and `q1`.
###Code
circuit.barrier()
circuit.measure([0, 1], [0, 1])
circuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
At this point the measurement results of `q0` and `q1` determine which correction the receiver must apply to `q2`: a controlled-NOT undoes a possible bit flip and a controlled-Z undoes a possible phase flip. (Here the corrections are applied as quantum-controlled gates rather than classically conditioned ones, which gives the same result.)
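A quick sketch of why these two gates are the right corrections (writing the measured qubits as $|q_0 q_1\rangle$ and `q2`'s state in parentheses): with the teleported state written as $\alpha|0\rangle + \beta|1\rangle$, the three-qubit state after the CNOT and Hadamard above is
$$\tfrac{1}{2}\Big[\,|00\rangle(\alpha|0\rangle+\beta|1\rangle) + |01\rangle(\alpha|1\rangle+\beta|0\rangle) + |10\rangle(\alpha|0\rangle-\beta|1\rangle) + |11\rangle(\alpha|1\rangle-\beta|0\rangle)\Big]$$
so a `1` on `q1` calls for an X correction (the `cx`) on `q2`, and a `1` on `q0` calls for a Z correction (the `cz`).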
###Code
circuit.barrier()
circuit.cx(1, 2)
circuit.cz(0, 2)
circuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
And now let's measure the final output.
###Code
circuit.barrier()
circuit.measure(2, 2)
circuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Let's simulate and measure the circuit to verify that the state was teleported from `q0` to `q2`. Since `q0` was prepared in `|1>`, the bit measured from `q2` should always come out `1`; Qiskit prints the classical register with `c2` leftmost, so every counted bitstring should look like `1xx`, with the other two bits varying randomly.
###Code
result = qiskit.execute(circuit, backend=qiskit.Aer.get_backend('qasm_simulator'), shots=1024).result()
visualization.plot_histogram(result.get_counts())
###Output
_____no_output_____ |
Models/FCHarDNet/fchardnet.ipynb | ###Markdown
Including Dependencies
###Code
from tqdm.notebook import tqdm
import os
import random
import argparse
import json
import time
import io
import pickle
import zipfile
import urllib
from time import perf_counter
from threading import Thread
from collections import namedtuple
from io import BytesIO
import numpy as np
import requests
import cv2
import IPython
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.metrics import confusion_matrix
import torch
import torch.nn as nn
import torch.nn.functional as Fun
import torch.utils.data as data
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms
import torchvision.transforms.functional as F  # used by the ExtResize transform defined below
###Output
_____no_output_____
###Markdown
Utility Functions
###Code
# Focal loss used for semantic segmentation (reference mentioned at the bottom of the notebook)
class FocalLoss(nn.Module):
def __init__(self, alpha=1, gamma=0, size_average=True, ignore_index=255):
super(FocalLoss, self).__init__()
self.alpha = alpha
self.gamma = gamma
self.ignore_index = ignore_index
self.size_average = size_average
def forward(self, inputs, targets):
ce_loss = Fun.cross_entropy(
inputs, targets, reduction='none', ignore_index=self.ignore_index)
pt = torch.exp(-ce_loss)
focal_loss = self.alpha * (1-pt)**self.gamma * ce_loss
if self.size_average:
return focal_loss.mean()
else:
return focal_loss.sum()
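# A minimal usage sketch of FocalLoss: it expects raw logits of shape (N, C, H, W) and
# integer targets of shape (N, H, W); with gamma=0 it reduces to plain cross-entropy,
# while larger gamma down-weights easy, well-classified pixels.
# criterion = FocalLoss(alpha=1, gamma=2, ignore_index=255)
# logits = torch.randn(2, 17, 480, 640)
# target = torch.randint(0, 17, (2, 480, 640))
# loss = criterion(logits, target)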
# Metrics for evaluation (reference mentioned at the bottom of the notebook)
class _StreamMetrics(object):
def __init__(self):
""" Overridden by subclasses """
raise NotImplementedError()
def update(self, gt, pred):
""" Overridden by subclasses """
raise NotImplementedError()
def get_results(self):
""" Overridden by subclasses """
raise NotImplementedError()
def to_str(self, metrics):
""" Overridden by subclasses """
raise NotImplementedError()
def reset(self):
""" Overridden by subclasses """
raise NotImplementedError()
class StreamSegMetrics(_StreamMetrics):
"""
Stream Metrics for Semantic Segmentation Task
"""
def __init__(self, n_classes):
self.n_classes = n_classes
self.confusion_matrix = np.zeros((n_classes, n_classes))
def update(self, label_trues, label_preds):
#boolarr=label_trues==255
#label_preds[boolarr]=255
for lt, lp in zip(label_trues, label_preds):
self.confusion_matrix += self._fast_hist( lt.flatten(), lp.flatten() )
@staticmethod
def to_str(results):
string = "\n"
for k, v in results.items():
if k!="Class IoU":
string += "%s: %f\n"%(k, v)
#string+='Class IoU:\n'
#for k, v in results['Class IoU'].items():
# string += "\tclass %d: %f\n"%(k, v)
return string
def _fast_hist(self, label_true, label_pred):
mask = (label_true >= 0) & (label_true < self.n_classes)
hist = np.bincount(
self.n_classes * label_true[mask].astype(int) + label_pred[mask],
minlength=self.n_classes ** 2,
).reshape(self.n_classes, self.n_classes)
return hist
def get_results(self):
"""Returns accuracy score evaluation result.
- overall accuracy
- mean accuracy
- mean IU
- fwavacc
"""
hist = self.confusion_matrix
acc = np.diag(hist).sum() / hist.sum()
acc_cls = np.diag(hist) / hist.sum(axis=1)
acc_cls = np.nanmean(acc_cls)
iu = np.diag(hist) / (hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist))
mean_iu = np.nanmean(iu)
freq = hist.sum(axis=1) / hist.sum()
fwavacc = (freq[freq > 0] * iu[freq > 0]).sum()
cls_iu = dict(zip(range(self.n_classes), iu))
# cls_ac = dict(zip(range(self.n_classes),np.diag(hist)/(hist.sum(axis=1))))
return {
"Overall Acc": acc,
"Mean Acc": acc_cls,
"FreqW Acc": fwavacc,
"Mean IoU": mean_iu,
#"Class Acc": cls_ac,
"Class IoU": cls_iu,
}
def reset(self):
self.confusion_matrix = np.zeros((self.n_classes, self.n_classes))
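# A minimal usage sketch of StreamSegMetrics: accumulate ground-truth/prediction pairs
# batch by batch, then read the aggregated scores.
# metrics = StreamSegMetrics(n_classes=17)
# metrics.update(labels.cpu().numpy(), preds)   # both (N, H, W) integer arrays
# results = metrics.get_results()               # dict with "Mean IoU", "Overall Acc", "Class IoU", ...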
class AverageMeter(object):
"""Computes average values"""
def __init__(self):
self.book = dict()
def reset_all(self):
self.book.clear()
def reset(self, id):
item = self.book.get(id, None)
if item is not None:
item[0] = 0
item[1] = 0
def update(self, id, val):
record = self.book.get(id, None)
if record is None:
self.book[id] = [val, 1]
else:
record[0]+=val
record[1]+=1
def get_results(self, id):
record = self.book.get(id, None)
assert record is not None
return record[0] / record[1]
_pil_interpolation_to_str = {
Image.NEAREST: 'PIL.Image.NEAREST',
Image.BILINEAR: 'PIL.Image.BILINEAR',
Image.BICUBIC: 'PIL.Image.BICUBIC',
Image.LANCZOS: 'PIL.Image.LANCZOS',
Image.HAMMING: 'PIL.Image.HAMMING',
Image.BOX: 'PIL.Image.BOX',
}
class ExtResize(object):
"""
Resize the input PIL Image to the given size.
    Resizes both the image and its annotation (label) together.
"""
def __init__(self, size, interpolation=Image.BILINEAR):
#assert isinstance(size, int) or (isinstance(size, collections.Iterable) and len(size) == 2)
self.size = size
self.interpolation = interpolation
def __call__(self, img, lbl):
"""
Args:
img (PIL Image): Image to be scaled.
Returns:
PIL Image: Rescaled image.
"""
return F.resize(img, self.size, self.interpolation), F.resize(lbl, self.size, Image.NEAREST)
def __repr__(self):
interpolate_str = _pil_interpolation_to_str[self.interpolation]
return self.__class__.__name__ + '(size={0}, interpolation={1})'.format(self.size, interpolate_str)
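# Note on ExtResize: it applies the same resize to the image and its label, but the label
# is always resized with NEAREST interpolation so class ids are never blended into
# invalid intermediate values.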
###Output
_____no_output_____
###Markdown
Loading FCHarDNet Model
###Code
#This cell loads the model definition and helper utilities from the provided script files
os.chdir("/kaggle/input/script/")
!python utils.py
!python fchardnet.py
from utils import *
from fchardnet import hardnet
!ls
os.chdir("..")
os.chdir('../working')
###Output
_____no_output_____
###Markdown
DataLoader
###Code
class CustomData(data.Dataset):
colorMap={
"backgroud": (225,229 , 204),
'wall':(152, 152, 79),
'building':(70, 70, 70),
'sky':(70, 130, 180),
'sidewalk':(244, 35, 232),
'field/grass':(152, 251, 152),
'vegitation':(107, 142, 35),
'person': (220, 20, 60),
'mountain':(139, 218, 51),
'stairs':(202, 251, 254),
'bench':(108, 246, 107),
'pole':(153, 153, 153),
'car':(41, 34, 177),
'bike':(111, 34, 177),
'animal':(211, 205, 33),
'ground':(147, 147, 136),
'fence':(241, 170, 17),
'water':(29, 231, 229),
'road':(35, 18, 16),
'sign_board':(113, 97, 41),
'floor':(73, 23, 77),
'traffic_light':(225, 175, 57),
'ceeling':(51, 0, 0),
'unlabelled':(0,0,0),
}
    #Class definitions and label mapping used for the custom dataset
CustomDataSet = namedtuple('CustomDataSet', ['name', 'id', 'train_id', 'category', 'category_id',
'has_instances', 'ignore', 'color'])
classes = [
CustomDataSet('backgroud', 0, 0, 'obstacle', 0, False, True, colorMap['backgroud']),
CustomDataSet('wall', 1, 1, 'solid', 0, False, True, colorMap['wall']),
CustomDataSet('building', 2, 2, 'solid', 0, False, True, colorMap['building']),
CustomDataSet('sky', 3, 3, 'backgroud', 0, False, True, colorMap['sky']),
CustomDataSet('sidewalk', 4, 4, 'nature', 0, False, True, colorMap['sidewalk']),
CustomDataSet('field/grass', 5, 5, 'nature', 0, False, True, colorMap['field/grass']),
CustomDataSet('vegitation', 6, 6, 'nature', 0, False, True, colorMap['vegitation']),
CustomDataSet('person', 7, 7, 'human', 0, False, True, colorMap['person']),
CustomDataSet('mountain', 8, 8, 'nature', 0, False, True, colorMap['mountain']),
CustomDataSet('stairs', 9, 255, 'solid', 0, False, False, colorMap['stairs']),
CustomDataSet('bench', 10, 0, 'obstacle', 0, False, False, colorMap['bench']),
CustomDataSet('pole', 11, 0, 'obstacle', 0, False, False, colorMap['pole']),
CustomDataSet('car', 12, 9, 'vahicle', 0, False, True, colorMap['car']),
CustomDataSet('bike', 13, 10, 'vahicle', 0, False, True, colorMap['bike']),
CustomDataSet('animal', 14, 11, 'animal', 0, False, True, colorMap['animal']),
CustomDataSet('ground', 15, 12, 'land', 0, False, True, colorMap['ground']),
CustomDataSet('fence', 16, 13, 'solid', 0, False, True, colorMap['fence']),
CustomDataSet('water', 17, 14, 'land', 0, False, True, colorMap['water']),
CustomDataSet('road', 18, 15, 'land', 0, False, True, colorMap['road']),
CustomDataSet('sign_board', 19, 0, 'obstacle', 0, False, False, colorMap['sign_board']),
CustomDataSet('floor', 20, 4, 'land', 0, False, False, colorMap['floor']),
CustomDataSet('traffic_light', 21, 0,'obstacle', 0, False, False, colorMap['traffic_light']),
CustomDataSet('ceeling', 22, 16, 'ceeling', 0, False, True, colorMap['ceeling']),
CustomDataSet('unlabelled', 23, 255, 'void', 0, False, False, colorMap['unlabelled']),
]
#Numpy Array used for changing output to color Images
train_id_to_color = [c.color for c in classes if (c.ignore)]
train_id_to_color.append([0, 0, 0])
train_id_to_color = np.array(train_id_to_color)
#Gives Labels of Output
train_id_to_label= [c.name for c in classes if (c.ignore)]
train_id_to_label.append("unlabeled")
train_id_to_label = np.array(train_id_to_label)
id_to_train_id = np.array([c.train_id for c in classes])
def __init__(self, root_image, root_target, split='train', mode='fine', target_type='semantic', transform=None):
"""
        Collect all image and annotation paths from the directories listed in
        root_image and root_target respectively, and store the individual image
        and target paths in the self.images and self.targets lists.
"""
self.root_image = [os.path.expanduser(i) for i in root_image ]
self.root_target = [os.path.expanduser(i) for i in root_target]
self.images = []
self.targets = []
self.transform = transform
self.split = split
for i in range(len(self.root_image)):
self.root_image[i]=os.path.join(self.root_image[i],split)
self.root_target[i]=os.path.join(self.root_target[i],split)
lst=os.listdir(self.root_target[i])
for img in lst:
if img[0]=="C":
self.images.append(os.path.join(root_image[i],img[:-4]+'.jpg'))
else:
self.images.append(os.path.join(self.root_image[i],img[:-4]+'.jpg'))
self.targets.append(os.path.join(self.root_target[i],img))
@classmethod
def encode_target(cls, target):
"""
input: target image 2d matrix
output: 2d matrix with custom mapping
"""
return cls.id_to_train_id[np.array(target)]
@classmethod
def decode_target(cls, target):
"""
input: target 2d matrix
output: color map (RGB) image
"""
target[target == 255] = 0
#target = target.astype('uint8') + 1
return cls.train_id_to_color[target]
def __getitem__(self, index):
"""
        input: index of an image / target pair
        Opens the image and target from their paths,
        applies the transformation,
        and returns the image and target (annotation).
"""
image = Image.open(self.images[index]).convert('RGB')
target = Image.open(self.targets[index])
if self.transform:
image, target = self.transform(image, target)
target = self.encode_target(target)
return image, target
def __len__(self):
return len(self.images)
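# Note on decode_target: it maps a (H, W) array of predicted train ids back to an RGB
# image for visualization, e.g.
# rgb = CustomData.decode_target(pred)   # pred: numpy array of class ids in [0, 16]; 255 is mapped to class 0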
###Output
_____no_output_____
###Markdown
Training Setup
###Code
from matplotlib import gridspec
def create_label_colormap():
colormap = CustomData.train_id_to_color
return colormap
def label_to_color_image(label):
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image,seg_map):
"""Visualizes input image, segmentation map and overlay view."""
plt.figure(figsize=(20, 4))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = CustomData.decode_target(seg_map.cpu()).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation map')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
    print("Unique labels found:", unique_labels)
ax = plt.subplot(grid_spec[3])
plt.imshow(FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
plt.show()
LABEL_NAMES = CustomData.train_id_to_label
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
##Get Dataset
def get_dataset(imageFolders,targetFolders,batchSize,valBatchSize):
""" Dataset And Augmentation """
train_transform = ExtCompose([
ExtResize((640,480)),
ExtColorJitter( brightness=0.5, contrast=0.5, saturation=0.5 ),
ExtRandomHorizontalFlip(),
ExtToTensor(),
ExtNormalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
val_transform = ExtCompose([
ExtResize((640,480)),
ExtToTensor(),
ExtNormalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
train_dst = CustomData(root_image=imageFolders,root_target=targetFolders,
split='training', transform=train_transform)
val_dst = CustomData(root_image=imageFolders,root_target=targetFolders,
split='validation', transform=val_transform)
train_loader = data.DataLoader(train_dst, batch_size=batchSize, shuffle=True, num_workers=4, pin_memory=True)
val_loader = data.DataLoader(val_dst, batch_size=valBatchSize, shuffle=True, num_workers=4)
return train_loader, val_loader
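# Note: the Normalize mean/std values above are the standard ImageNet statistics, the
# usual choice when starting from pretrained backbone weights.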
def GetPretrainedModel(device,model,num_classes=17, savedPath="/", pretrained=False, continue_training=False, optimizer=None, scheduler=None):
"""
    If pretrained is True:
        Load the model, optimizer, scheduler, current epoch and best score from the .pth file in savedPath.
        Load the saved Loss and Score histories so training can continue.
    If pretrained is False:
        Load Cityscapes weights for transfer learning and
        replace the output layer with the required number of classes.
"""
cur_epoch=0
best_score=0
if pretrained:
model.finalConv= nn.Conv2d(48, num_classes, 1, 1, bias=True)
        ckpt = savedPath + "last_model.pth"
checkpoint = torch.load(ckpt, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint["model_state"])
#model = nn.DataParallel(model)
model.to(device)
        with open(savedPath + "Loss.txt", "rb") as File:
            Loss = pickle.load(File)
        with open(savedPath + "Score.txt", "rb") as File:
            Score = pickle.load(File)
if continue_training:
optimizer.load_state_dict(checkpoint["optimizer_state"])
scheduler.load_state_dict(checkpoint["scheduler_state"])
cur_epoch = checkpoint["epoch"]
best_score = checkpoint['best_score']
            print("Training state restored from %s" % savedPath)
        print("Model restored from %s" % savedPath)
del checkpoint # free memory
else:
model = torch.nn.DataParallel(model)
checkpoint=torch.load('../input/cityscape-mobilenet/hardnet70_cityscapes_model_2.pkl',map_location=torch.device('cpu'))
model.load_state_dict(checkpoint["model_state"])
model=model.module
model.finalConv= nn.Conv2d(48, num_classes, 1, 1, bias=True)
#model = nn.DataParallel(model)
model.to(device)
if continue_training:
return cur_epoch, best_score, optimizer, scheduler, model
return cur_epoch, best_score, model
##Validation Function
def validate1(model,loader,metrics,device):
"""
    Validation function that computes mIoU and per-class IoU using the StreamSegMetrics object.
"""
metrics.reset()
with torch.no_grad():
for i, (images, labels) in enumerate(loader):
images = images.to(device, dtype=torch.float32)
labels = labels.to(device, dtype=torch.long)
outputs = model(images)
preds = outputs.detach().max(dim=1)[1].cpu().numpy()
targets=labels.cpu().numpy()
metrics.update(targets, preds)
return metrics.get_results()
def TrainModel(imageFolders,targetFolders,learningRate=0.00001,weightDecay=0.0001,learningRatePolicy='poly', noOfEpochs=27,
stepSize=10000, savedPath="/",pretrained=False, batchSize=16,valBatchSize=16, lossFunction="cross_entropy", useCuda=True):
"""
    Get the train_loader and val_loader.
    Set up the loss function, optimizer and scheduler.
    Load the saved weights and training state if pretrained is True.
    Training loop: load batches of images, pass them through the model, update the weights.
    Run validation to check the results.
    Save the model and the other files required to display the results.
"""
continue_training=True
num_classes=17
train_loader, val_loader=get_dataset(imageFolders,targetFolders,batchSize,valBatchSize)
device = torch.device('cuda' if (torch.cuda.is_available() and useCuda==True) else 'cpu')
model = hardnet(19)
#Optimizer, Metric, scheduler and Loss Function
optimizer = torch.optim.SGD(model.parameters(), lr=learningRate, momentum=0.9, weight_decay=weightDecay)
metrics = StreamSegMetrics(num_classes)
if lossFunction=="focal":
criterion = FocalLoss(ignore_index=255, size_average=True)
else:
criterion = nn.CrossEntropyLoss(ignore_index=255, reduction='mean')
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=stepSize, gamma=0.1)
##pretraining Code
cur_epoch=0
best_score=0
Loss=[]
Score=[]
cur_epoch, best_score,optimizer,scheduler, model= GetPretrainedModel(model=model, device=device,num_classes=num_classes, savedPath=savedPath, pretrained=pretrained, continue_training=continue_training, optimizer=optimizer, scheduler=scheduler)
##Training Loop
    print("Best score (Mean IoU) of the model at epoch " + str(cur_epoch) + " is: " + str(best_score))
interval_loss = 0
epochs=cur_epoch+noOfEpochs+1
max_score=10000000000
model_curve=[]
best_score = 0.0
cur_itrs=0
start = perf_counter()
for epoch in tqdm(range(cur_epoch+1,epochs), desc="Epochs"):
model.train()
running_loss = []
for step, (images, labels) in enumerate(tqdm(train_loader, desc="Training", leave=False)):
cur_itrs += 1
#print(images.shape)
#print(labels.shape)
images = images.to(device, dtype=torch.float32)
labels = labels.to(device, dtype=torch.long)
optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
#end = perf_counter()
running_loss.append(loss.item())
interval_loss += loss.item()
model.eval()
val_score = validate1(model=model, loader=val_loader, metrics=metrics,device=device)
        print("Training Loss:", np.mean(running_loss), "| Validation metrics:")
print(metrics.to_str(val_score))
#if val_score['Mean IoU'] > best_score: # save best model
best_score = val_score['Mean IoU']
        os.makedirs("./Files", exist_ok=True)
torch.save({
"epoch": epoch,
"model_state": model.state_dict(),
"optimizer_state": optimizer.state_dict(),
"scheduler_state": scheduler.state_dict(),
"best_score": best_score,
}, './Files/last_model.pth')
Loss.append(np.mean(running_loss))
Score.append(val_score)
with open("./Files/Loss.txt","wb") as File:
pickle.dump(Loss,File)
with open("./Files/Score.txt","wb") as File:
pickle.dump(Score,File)
scheduler.step()
print("Epoch: {}/{} - Loss: {:.4f}".format(epoch+1, epochs, np.mean(running_loss)))
end = perf_counter()
print("Time",end-start)
###Output
_____no_output_____
###Markdown
Training Function CallTo train the model, you must give the paths of folders that each contain both a training and a validation subfolder.* imageFolders is the list of image directory paths* targetFolders is the list of ground-truth directory paths* noOfEpochs is the number of epochs* if pretrained=True then you must set savedPath to the path of the pretrained weights* if a GPU is available, use useCuda=True* batchSize is the training batch size* valBatchSize is the validation batch size* lossFunction can be 'focal' or the default cross-entropy* 255 is always the ignore index, meaning the loss is not computed for pixels labelled 255* your ground truth must use labels 0 to 16, and may use 255 for pixels to ignore
###Code
#Path for images directories
imageFolder= [
'../input/adek20-screen-parsing/ADEChallengeData2016/images',
'../input/adek20-screen-parsing/ADEChallengeData2016/images',
'../input/coco-dataset/Coco Stuff Dataset/images',
'../input/custom-sidewalk/custom_sidewalk_updated/customdataset',
]
#Path for target directories
targetFolder=[
'../input/adk-coco-filter/ADK_COCO_Filter_Anotation_Updated/adk',
'../input/adk-coco-filter/ADK_COCO_Filter_Anotation_Updated/adk_floor_filter',
'../input/adk-coco-filter/ADK_COCO_Filter_Anotation_Updated/coco',
'../input/custom-sidewalk/custom_sidewalk_updated/customannotations',
]
saved_path="../input/fdnet-results-16-classes/Files/"
TrainModel(imageFolders=imageFolder, targetFolders=targetFolder, savedPath=saved_path, pretrained=True,noOfEpochs=2)
###Output
_____no_output_____
###Markdown
Prediction Function Setup
###Code
def PredictImage(input_image, useCuda=True,num_classes=17,pretrained=False, saved_path="/" ):
"""
Passing Image to Model and Display its overlay and semantic output
"""
device = torch.device('cuda' if (torch.cuda.is_available() and useCuda==True) else 'cpu')
model = hardnet(19)
cur_epoch, best_score, model= GetPretrainedModel(model=model, device=device,num_classes=num_classes, savedPath=saved_path, pretrained=pretrained, continue_training=False, optimizer=None, scheduler=None)
    print("Best score (Mean IoU) of the model at epoch " + str(cur_epoch) + " is: " + str(best_score))
preprocess = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor= preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
if torch.cuda.is_available() and useCuda==True:
input_batch = input_batch.to('cuda')
model.eval()
with torch.no_grad():
start = perf_counter()
output = model(input_batch)[0]
end = perf_counter()
print(end-start)
output_predictions = output.argmax(0)
pred = CustomData.decode_target(output_predictions.cpu()).astype(np.uint8)
vis_segmentation(input_image, output_predictions.cpu())
###Output
_____no_output_____
###Markdown
Prediction Function Call* input_image: the PIL image to predict (it is opened and resized by the calling cell below)* pretrained: True if the model has pretrained weights* saved_path: path of the pretrained weights* useCuda: True if a GPU is available* num_classes: number of classes your model has
###Code
saved_path="../input/fdnet-results-16-classes/Files/"
preprocess_input= transforms.Compose([
transforms.Resize((480,640)),
])
input_image = Image.open("../input/my-dataset/images/12.jpg")
input_image=preprocess_input(input_image)
PredictImage(input_image, pretrained=True,saved_path=saved_path)
###Output
_____no_output_____
###Markdown
Video Function Call* video_path: path of the video to predict* pretrained: True if the model has pretrained weights* saved_path: path of the pretrained weights* useCuda: True if a GPU is available* num_classes: number of classes your model has* num_frames: number of frames to read from the video (every 15th frame is processed)* output_path: path of the resulting output video (.mp4)
###Code
def PredictVideo(video_path,useCuda=True,num_classes=17,pretrained=False, num_frames = 20000 ,saved_path="result",output_path="results"):
"""
Function Used to Change a Video to Semantic Overlay Video
"""
device = torch.device('cuda' if (torch.cuda.is_available() and useCuda==True) else 'cpu')
model = hardnet(19)
cur_epoch, best_score, model= GetPretrainedModel(model=model, device=device,num_classes=num_classes, savedPath=saved_path, pretrained=pretrained, continue_training=False, optimizer=None, scheduler=None)
    print("Best score (Mean IoU) of the model at epoch " + str(cur_epoch) + " is: " + str(best_score))
preprocess_input= transforms.Compose([
transforms.Resize((480,640)),])
preprocess = transforms.Compose([
transforms.Resize((480,640)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),])
np_buff=[]
def run_model(input_image):
input_tensor= preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)
if torch.cuda.is_available():
input_batch = input_batch.to('cuda')
model.to('cuda')
model.eval()
with torch.no_grad():
output = model(input_batch)[0]
output_predictions = output.argmax(0)
return output_predictions
def vis_segmentation_stream(image, seg_map, index):
seg_image = CustomData.decode_target(seg_map.cpu()).astype(np.uint8)
seg_image=Image.fromarray(seg_image.astype('uint8'), 'RGB')
background = image.convert("RGBA")
overlay = seg_image.convert("RGBA")
new_img = Image.blend(background, overlay, 0.7)
img=np.array(new_img)
np_buff.append(img)
def run_visualization_video(frame, index):
original_im = Image.fromarray(frame[..., ::-1])
original_im=preprocess_input(original_im)
seg_map = run_model(original_im)
vis_segmentation_stream(original_im, seg_map, index)
def convert_to_video():
print("Video path", output_path)
print("Output Length", len(np_buff))
video=cv2.VideoWriter(output_path,cv2.VideoWriter_fourcc(*'DIVX'), 15,(np_buff[0].shape[1],np_buff[0].shape[0]))
for i in np_buff:
i=i[:,:,:-1]
video.write(i[:,:,::-1])
video.release()
print("Your result is saved as ",output_path)
if not os.path.isfile(video_path):
print('downloading the sample video...')
video_path = urllib.request.urlretrieve('https://github.com/lexfridman/mit-deep-learning/raw/master/tutorial_driving_scene_segmentation/mit_driveseg_sample.mp4')[0]
print('running deeplab on the sample video...')
print("Opening video ", video_path)
video = cv2.VideoCapture(video_path)
try:
start = perf_counter()
for i in range(num_frames):
_, frame = video.read()
if not _: break
if i%15==0:
run_visualization_video(frame, i)
end = perf_counter()
sec=end-start
print(sec, "seconds to process video")
convert_to_video()
except KeyboardInterrupt:
plt.close()
print("Stream stopped.")
output_path="result1.mp4"
video_path="../input/video-test/xyz.mp4"
saved_path="../input/fdnet-results-16-classes/Files/"
PredictVideo(saved_path=saved_path,pretrained=True,video_path=video_path,num_frames=500,output_path=output_path)
###Output
_____no_output_____ |
Object Tracking and Localization/Robot Localization/Multiple Moves/Multiple Movements, exercise.ipynb | ###Markdown
Multiple MovementsLet's see how our robot responds to moving multiple times without sensing! First let's include our usual resource imports and display function.
###Code
# importing resources
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
A helper function for visualizing a distribution.
###Code
def display_map(grid, bar_width=1):
if(len(grid) > 0):
x_labels = range(len(grid))
plt.bar(x_labels, height=grid, width=bar_width, color='b')
plt.xlabel('Grid Cell')
plt.ylabel('Probability')
plt.ylim(0, 1) # range of 0-1 for probability values
plt.title('Probability of the robot being at each cell in the grid')
plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1))
plt.show()
else:
print('Grid is empty')
###Output
_____no_output_____
###Markdown
QUIZ: Write code that moves 1000 times and then prints the resulting probability distribution.You are given the initial variables and a complete `move` function (that incorporates uncertainty), below.
###Code
# given initial variables
p=[0, 1, 0, 0, 0]
# the color of each grid cell in the 1D world
world=['green', 'red', 'red', 'green', 'green']
# Z, the sensor reading ('red' or 'green')
Z = 'red'
pHit = 0.6
pMiss = 0.2
pExact = 0.8
pOvershoot = 0.1
pUndershoot = 0.1
# Complete the move function
def move(p, U):
q=[]
# iterate through all values in p
for i in range(len(p)):
# use the modulo operator to find the new location for a p value
# this finds an index that is shifted by the correct amount
index = (i-U) % len(p)
nextIndex = (index+1) % len(p)
prevIndex = (index-1) % len(p)
s = pExact * p[index]
s = s + pOvershoot * p[nextIndex]
s = s + pUndershoot * p[prevIndex]
# append the correct, modified value of p to q
q.append(s)
return q
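# Note: each call to move() is a circular convolution of p with the kernel
# [pUndershoot, pExact, pOvershoot] = [0.1, 0.8, 0.1], shifted by U cells.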
# Here is code for moving twice
p = move(p, 1)
p = move(p, 1)
print(p)
display_map(p)
## TODO: Write code for moving 1000 times
for i in range(1000):
p = move(p, 1)
if i%20==0:
print(p)
display_map(p)
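
# Show the resulting distribution after all 1000 moves, as the quiz asks. Because every
# uncertain move spreads probability out and nothing is sensed, the distribution converges
# toward uniform (about 0.2 per cell): the robot gradually loses track of where it is.
print(p)
display_map(p)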
###Output
_____no_output_____ |
DecisionTreeDemo.ipynb | ###Markdown
Using a Single Decision Tree for a ClassifierThis notebook shows how to fit and visualize a single classification tree using scikit-learn 0.21.
###Code
# Demo printing and drawing of a decision Tree
# (C) Thimoty Barbieri [email protected] oct-2017
#from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
import pandas
import numpy as np
#import os
#os.chdir('C:/Users/thimo/Dropbox/corsi/machine_learning/real-world-machine-learning-master')
#os.chdir('./')
def tree_to_code(tree, feature_names):
tree_ = tree.tree_
feature_name = [
feature_names[i] if i != -2 else "undefined!"
for i in tree_.feature
]
print("def tree({}):".format(", ".join(feature_names)))
def recurse(node, depth):
indent = " " * depth
if tree_.feature[node] != -2:
name = feature_name[node]
threshold = tree_.threshold[node]
print("{}if {} <= {}:".format(indent, name, threshold))
recurse(tree_.children_left[node], depth + 1)
print("{}else: # if {} > {}".format(indent, name, threshold))
recurse(tree_.children_right[node], depth + 1)
else:
print("{}return {}".format(indent, tree_.value[node]))
recurse(0, 1)
def cat_to_num(data):
categories = np.unique(data)
features = {}
for cat in categories:
binary = (data == cat)
features["%s=%s" % (data.name, cat)] = binary.astype("int")
return pandas.DataFrame(features)
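# Illustrative sketch: cat_to_num one-hot encodes a categorical Series, e.g.
# cat_to_num(pandas.Series(["male", "female", "male"], name="Sex"))
# returns a DataFrame with columns "Sex=female" and "Sex=male" holding 0/1 indicators.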
def prepare_data(data):
"""Takes a dataframe of raw data and returns ML model features
"""
# Initially, we build a model only on the available numerical values
features = data.drop(["PassengerId", "Survived", "Fare", "Name", "Sex", "Ticket", "Cabin", "Embarked"], axis=1)
# Setting missing age values to -1
features["Age"] = data["Age"].fillna(-1)
# Adding the sqrt of the fare feature
features["sqrt_Fare"] = np.sqrt(data["Fare"])
# Adding gender categorical value
features = features.join( cat_to_num(data['Sex']) )
# Adding Embarked categorical value
features = features.join( cat_to_num(data['Embarked'].fillna("")) )
return features
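# Note: the resulting feature columns are Pclass, Age, SibSp, Parch, sqrt_Fare,
# Sex=female, Sex=male and the Embarked=* indicators.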
###Output
_____no_output_____
###Markdown
Import the data and split it into train and test sets
###Code
data = pandas.read_csv("https://raw.githubusercontent.com/thimotyb/real-world-machine-learning/master/data/titanic.csv")
data[:5]
data_train = data[:int(0.8*len(data))]
data_test = data[int(0.8*len(data)):]
###Output
_____no_output_____
###Markdown
Prepare the data by one-hot encoding the categorical variables, build the decision tree classifier, and fit the model
###Code
features = prepare_data(data_train)
features[:5]
model = tree.DecisionTreeClassifier(max_depth = 4)
model.fit(features, data_train["Survived"])
###Output
_____no_output_____
###Markdown
Check that the model is not overfitting, and evaluate its accuracy
###Code
print(model.score(prepare_data(data_train), data_train["Survived"]))
model.score(prepare_data(data_test), data_test["Survived"])
###Output
0.8117977528089888
###Markdown
Print a textual version of the tree to understand which features are used
###Code
tree_to_code(model, features.columns)
print("Score: {0}".format(model.score(prepare_data(data_test), data_test["Survived"])))
###Output
def tree(Pclass, Age, SibSp, Parch, sqrt_Fare, Sex=female, Sex=male, Embarked=, Embarked=C, Embarked=Q, Embarked=S):
if Sex=female <= 0.5:
if Pclass <= 1.5:
if Age <= 53.0:
if Age <= -0.03999999165534973:
return [[13. 3.]]
else: # if Age > -0.03999999165534973
return [[34. 31.]]
else: # if Age > 53.0
if Age <= 75.5:
return [[18. 2.]]
else: # if Age > 75.5
return [[0. 1.]]
else: # if Pclass > 1.5
if sqrt_Fare <= 2.8125420808792114:
if Age <= 20.75:
return [[61. 2.]]
else: # if Age > 20.75
return [[61. 9.]]
else: # if sqrt_Fare > 2.8125420808792114
if Age <= 13.0:
return [[40. 19.]]
else: # if Age > 13.0
return [[141. 21.]]
else: # if Sex=female > 0.5
if Pclass <= 2.5:
if sqrt_Fare <= 5.371784687042236:
if sqrt_Fare <= 5.313115835189819:
return [[ 4. 51.]]
else: # if sqrt_Fare > 5.313115835189819
return [[1. 0.]]
else: # if sqrt_Fare > 5.371784687042236
if Parch <= 1.5:
return [[ 0. 66.]]
else: # if Parch > 1.5
return [[ 2. 11.]]
else: # if Pclass > 2.5
if sqrt_Fare <= 4.980359792709351:
if Age <= 6.5:
return [[12. 33.]]
else: # if Age > 6.5
return [[31. 27.]]
else: # if sqrt_Fare > 4.980359792709351
if sqrt_Fare <= 5.597430229187012:
return [[10. 0.]]
else: # if sqrt_Fare > 5.597430229187012
return [[6. 2.]]
Score: 0.8212290502793296
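###Markdown
As a side note, scikit-learn >= 0.21 also ships a built-in text exporter that produces a similar printout without the custom tree_to_code helper; a minimal sketch:
###Code
from sklearn.tree import export_text
print(export_text(model, feature_names=list(features.columns)))
###Output
_____no_output_____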
###Markdown
Draw the tree (Scikit >= 0.21)
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=[30.0, 30.0])
tree.plot_tree(model, feature_names=features.columns)
###Output
_____no_output_____
Intro-To-Computer-Vision-1/1_1_Image_Representation/6_2. Standardizing the Data.ipynb | ###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
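###Markdown
The helpers module is not shown here. For reference, a minimal sketch of what such a load_dataset helper typically does, assuming day_night_images/training/ contains "day" and "night" subfolders of images (the actual helpers.py may differ):
###Code
import os
import glob
import matplotlib.image as mpimg

def load_dataset_sketch(image_dir):
    # Collect (image, label) pairs from the "day" and "night" subfolders
    im_list = []
    for im_type in ["day", "night"]:
        for file in glob.glob(os.path.join(image_dir, im_type, "*")):
            im = mpimg.imread(file)
            if im is not None:
                im_list.append((im, im_type))
    return im_list
###Output
_____no_output_____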
###Markdown
--- 1. Visualize the input images
###Code
# Print out 1. The shape of the image and 2. The image's label
# Select an image and its label by list index
image_index = 0
selected_image = IMAGE_LIST[image_index][0]
selected_label = IMAGE_LIST[image_index][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label: " + str(selected_label))
###Output
Shape: (458, 800, 3)
Label: day
###Markdown
2. Pre-process the DataAfter loading in each image, you have to standardize the input and output. Solution codeYou are encouraged to try to complete this code on your own, but if you are struggling or want to make sure your code is correct, there i solution code in the `helpers.py` file in this directory. You can look at that python file to see complete `standardize_input` and `encode` function code. For this day and night challenge, you can often jump one notebook ahead to see the solution code for a previous notebook! --- InputIt's important to make all your images the same size so that they can be sent through the same pipeline of classification steps! Every input image should be in the same format, of the same size, and so on. TODO: Standardize the input images* Resize each image to the desired input size: 600x1100px (hxw).
###Code
# This function should take in an RGB image and return a new, standardized version
def standardize_input(image):
## TODO: Resize image so that all "standard" images are the same size 600x1100 (hxw)
standard_im = cv2.resize(image, (1100, 600))
return standard_im
###Output
_____no_output_____
###Markdown
TODO: Standardize the outputWith each loaded image, you also need to specify the expected output. For this, use binary numerical values 0/1 = night/day.
###Code
# Examples:
# encode("day") should return: 1
# encode("night") should return: 0
def encode(label):
numerical_val = 0
## TODO: complete the code to produce a numerical label
if label=="day":
numerical_val = 1
return numerical_val
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.This uses the functions you defined above to standardize the input and output, so those functions must be complete for this standardization to work!
###Code
def standardize(image_list):
# Empty image data array
standard_list = []
# Iterate through all the image-label pairs
for item in image_list:
image = item[0]
label = item[1]
# Standardize the image
standardized_im = standardize_input(image)
# Create a numerical label
binary_label = encode(label)
# Append the image, and it's one hot encoded label to the full, processed list of image data
standard_list.append((standardized_im, binary_label))
return standard_list
# Standardize all training images
STANDARDIZED_LIST = standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
## TODO: Make sure the images have numerical labels and are of the same size
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
|
msa/models/pytorch_review/sequential_class.ipynb | ###Markdown
Pytorch Sequential Class:- The Sequential class implements the forward method for us- You might not be able to customize the forward pass
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import math
from collections import OrderedDict
torch.set_printoptions(linewidth=150)
train_set = torchvision.datasets.FashionMNIST(
root="./data",
train=True,
download=True,
transform=transforms.Compose([
transforms.ToTensor()
])
)
image, label = train_set[0]
image.shape
plt.imshow(image.squeeze(), cmap='gray')
train_set.classes
in_features = image.numel()
in_features
out_features = math.floor(in_features / 2)
out_features
out_classes = len(train_set.classes)
out_classes
network1 = nn.Sequential(
nn.Flatten(start_dim=1), # 28*28 = 784
nn.Linear(in_features, out_features),
nn.Linear(out_features, out_classes)
)
network1
image = image.unsqueeze(0)
image.shape
network1[1]
network1(image)
# Instead of a plain Sequential, we can pass an OrderedDict: you can name your layers
layers = OrderedDict([
('flat', nn.Flatten(start_dim=1)),
('hidden', nn.Linear(in_features, out_features)),
('output', nn.Linear(out_features, out_classes))
])
network2 = nn.Sequential(layers)
network2
network2(image)
# Show that both networks provide the same prediction
torch.manual_seed(50)
network1 = nn.Sequential(
nn.Flatten(start_dim=1), # 28*28 = 784
nn.Linear(in_features, out_features),
nn.Linear(out_features, out_classes)
)
torch.manual_seed(50)
layers = OrderedDict([
('flat', nn.Flatten(start_dim=1)),
('hidden', nn.Linear(in_features, out_features)),
('output', nn.Linear(out_features, out_classes))
])
network2 = nn.Sequential(layers)
print(f"Network 1: {network1(image)}")
print(f"Network 2: {network2(image)}")
# Third way to create network
torch.manual_seed(50)
network3 = nn.Sequential()
network3.add_module("flat", nn.Flatten(start_dim=1))
network3.add_module("hidden", nn.Linear(in_features, out_features))
network3.add_module("output", nn.Linear(out_features, out_classes))
network3
network1(image), network2(image), network3(image)
###Output
_____no_output_____
###Markdown
Building a Network Class
###Code
class Network(nn.Module):
def __init__(self):
super().__init__()
# Convolutional layers
self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1) # in_channel = 1 = grayscale, hyperparam, hyperparam
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5, stride=1) # we increase the output channels when we add extra conv layers
# Fully connected layers
self.fc1 = nn.Linear(in_features=12*4*4, out_features=120, bias=True) # we also shrink the number of features toward the number of classes that we have
self.fc2 = nn.Linear(in_features = 120, out_features=60, bias=True)
self.out = nn.Linear(in_features = 60, out_features=10, bias=True)
def forward(self, t):
# input layer
t = t
# convolution 1, not
t = self.conv1(t)
t = F.relu(t) # operation do not use weight, unlike layers
t = F.max_pool2d(t, kernel_size=2, stride=2) # operation do not use weight, unlike layers
# convolution 2: => relu => maxpool
t = self.conv2(t)
# WHY do we need these 2 layers?
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2) # how to determine these values?
# Transition from Conv to Linear will require flatten
t = t.reshape(-1, 12*4*4) # 4x4 = shape of reduce image (originally 28x28)
# linear 1:
t = self.fc1(t)
t = F.relu(t)
# linear 2:
t = self.fc2(t)
t = F.relu(t)
# output:
t = self.out(t)
return t
torch.manual_seed(50)
network = Network()
network
###Output
_____no_output_____
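###Markdown
A note on the hard-coded 12*4*4: with a 28x28 FashionMNIST input, each 5x5 convolution (stride 1, no padding) shrinks the spatial size by 4 and each 2x2 max-pool halves it, so 28 -> 24 -> 12 -> 8 -> 4, leaving 12 channels of 4x4 feature maps (12*4*4 = 192 inputs to fc1). The relu and max_pool2d calls add non-linearity and downsampling but hold no weights. A quick sanity check of those shapes using the network defined above:
###Code
with torch.no_grad():
    t = torch.randn(1, 1, 28, 28)                     # dummy FashionMNIST-sized batch
    t = F.max_pool2d(F.relu(network.conv1(t)), 2, 2)  # -> torch.Size([1, 6, 12, 12])
    t = F.max_pool2d(F.relu(network.conv2(t)), 2, 2)  # -> torch.Size([1, 12, 4, 4])
    print(t.shape)
###Output
_____no_output_____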
###Markdown
Building the same network using Sequential
###Code
# way 1 - Sequential - no need forward, contain ReLU, MaxPool
torch.manual_seed(50)
sequential1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Flatten(start_dim=1),
nn.Linear(in_features=12*4*4, out_features=120),
nn.ReLU(),
nn.Linear(in_features=120, out_features=60),
nn.ReLU(),
nn.Linear(in_features=60, out_features=10)
)
torch.manual_seed(50)
layers = OrderedDict([
('conv1', nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)),
('relu1', nn.ReLU()),
('maxpool1', nn.MaxPool2d(kernel_size=2, stride=2)),
('conv2', nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)),
('relu2', nn.ReLU()),
('maxpool2', nn.MaxPool2d(kernel_size=2, stride=2)),
('flatten', nn.Flatten(start_dim=1)),
('fc1', nn.Linear(in_features=12*4*4, out_features=120)),
('relu3', nn.ReLU()),
('fc2', nn.Linear(in_features=120, out_features=60)),
('relu4', nn.ReLU()),
('out', nn.Linear(in_features=60, out_features=10))
])
sequential2 = nn.Sequential(layers)
torch.manual_seed(50)
sequential3 = nn.Sequential()
sequential3.add_module('conv1', nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5))
sequential3.add_module('relu1', nn.ReLU())
sequential3.add_module('maxpool1', nn.MaxPool2d(kernel_size=2, stride=2))
sequential3.add_module('conv2', nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5))
sequential3.add_module('relu2', nn.ReLU())
sequential3.add_module('maxpool2', nn.MaxPool2d(kernel_size=2, stride=2))
sequential3.add_module('flatten', nn.Flatten(start_dim=1))
sequential3.add_module('fc1', nn.Linear(in_features=12*4*4, out_features=120))
sequential3.add_module('relu3', nn.ReLU())
sequential3.add_module('fc2', nn.Linear(in_features=120, out_features=60))
sequential3.add_module('relu4', nn.ReLU())
sequential3.add_module('out', nn.Linear(in_features=60, out_features=10))
sequential1
sequential2
sequential3
network(image), sequential1(image), sequential2(image), sequential3(image)
###Output
_____no_output_____ |
tutorials/tutorial__AperturePhotometry.ipynb | ###Markdown
1. Single Image Aperture Photometry
###Code
filename = "/sps/ztf/data/sci/2018/0221/304792/ztf_20180221304792_700353_zg_c01_o_q1_sciimg.fits"
from ztfimg import science
sci = science.ScienceQuadrant.from_filename(filename, use_dask=False)
sci.show( dataprop={"which":"dataclean"} )
###Output
_____no_output_____
###Markdown
1.1 sci.get_aperture() Basic usage: you input the `x` and `y` positionsIf you already have the x,y positions for which you want the aperture, and say you want a radius of 5.5 pixels
###Code
x,y = np.random.uniform(30,3000, size=(300,2)).T
flux, error, flag = sci.get_aperture(x,y, radius=5.5)
###Output
_____no_output_____
###Markdown
With background "ring"
###Code
x,y = np.random.uniform(30,3000, size=(300,2)).T
flux, error, flag = sci.get_aperture(x,y, radius=5.5,
bkgann=[5.5,7])
###Output
_____no_output_____
###Markdown
Providing multiple sizes at once (careful with the numpy broadcasting rules for radius)
###Code
x,y = np.random.uniform(30,3000, size=(300,2)).T
flux, error, flag = sci.get_aperture(x,y,
radius=np.linspace(2,8,10)[:,None])
np.shape(flux)
###Output
_____no_output_____
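###Markdown
The [:,None] reshapes the radius array from (10,) to (10, 1) so that it broadcasts against the 300 positions; the returned flux, error and flag arrays should then come out with one row per radius (here (10, 300)). A pure-numpy illustration of that broadcasting rule (the exact output layout of get_aperture itself may differ):
###Code
import numpy as np
r = np.linspace(2, 8, 10)[:, None]      # shape (10, 1)
pos = np.random.uniform(30, 3000, 300)  # shape (300,)
print(np.broadcast(r, pos).shape)       # (10, 300)
###Output
_____no_output_____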
###Markdown
The asdataframe optionTo ease future use, the asdataframe option can be used to convert the output arrays into a DataFrame. This is quite useful when multiple radii are given- f_i flux of the i-th radius- f_i_e associated error- f_i_f associated flag
###Code
x,y = np.random.uniform(30,3000, size=(300,2)).T
dataout = sci.get_aperture(x,y, radius=np.linspace(2,8,10)[:,None], asdataframe=True)
dataout
###Output
_____no_output_____
###Markdown
Providing `RA`, `Dec` in place of `x` and `y`.Nothing simpler, just use the `system="radec"` option. Remark that this only affects the input centroid; the radius remains in pixels
###Code
# Let's create fake radec
x,y = np.random.uniform(30,3000, size=(300,2)).T
ra,dec = sci.xy_to_radec(x,y)
sci.get_aperture(ra,dec, radius=np.asarray([3, 5])[:,None], system="radec", asdataframe=True)
###Output
_____no_output_____
###Markdown
1.2 sci.getcat_aperture()Sometimes it can be more convenient to input a catalog (DataFrame) in place of positions. `getcat_aperture()` does that for you. 1. It parses the catalog, 2. runs get_aperture() 3. joins the result to a copy of the input catalog and returns it (if you want, see `join=True`)
###Code
catalog = sci.get_catalog(calibrators="gaia", extra=[], isolation=20)
###Output
_____no_output_____
###Markdown
Let's just use the isolated stars from the catalog
###Code
isolatedgaia = catalog[catalog["isolated"]][["ra","dec", "x","y", "gmag", "e_gmag"]]
isolatedgaia
# for a radius of 5, but could have used a array of radius.
outcat = sci.getcat_aperture(isolatedgaia, radius=np.atleast_1d(5), xykeys=["x","y"], system="xy")
outcat
###Output
_____no_output_____
###Markdown
`xykeys=["x","y"], system="xy"` is by default. It is shown here so you know it exists. remark that, since this calls inside get_aperture, you have all the get_aperture options. For instance, if your catalog don't have x, y by simply ra, dec do: `xykeys=["ra","dec"], system="radec"` *** 2. Multiple Image Aperture PhotometryYou can loop over multiple files, but we implemented a ImageCollection wrapper that simplifies things (especially if you are using dask)
###Code
datafile = pandas.read_hdf("/sps/ztf/data/storage/starflat/datafiles/starflat_20180221_zg_rcid0.h5")
files = datafile["filename"].values
files[:5]
from ztfimg import aperture
###Output
_____no_output_____
###Markdown
In this example we are only using 5 images with use_dask=False
###Code
apertures = aperture.AperturePhotometry.from_filenames(files[:5], use_dask=False)
###Output
_____no_output_____
###Markdown
The ImageCollection is stored in apertures.images
###Code
[l.split("/")[-1] for l in apertures.images.filenames]
###Output
_____no_output_____
###Markdown
2.1 get_aperture()You can run the same tool as before, `get_aperture()`, but then x (and y) should be a list of x (y) positions, one entry per image
###Code
x,y = np.random.uniform(30,3000, size=(300, 5, 2)).T
dfs = apertures.get_aperture(x, y, radius=np.atleast_1d(5)[:,None], asdataframe=True)
###Output
_____no_output_____
###Markdown
Let's concat them keeping track of the input filename
###Code
basename = [l.split("/")[-1] for l in apertures.images.filenames]
basename
# remark that is this stored in apertures.basenames
pandas.concat(dfs, keys=basename)
###Output
_____no_output_____
###Markdown
2.2 getcat_aperture()Same as for get_aperture, input one catalog per image. This can easily be obtained with ImageCollection.get_catalog()
###Code
cats = apertures.images.get_catalog(calibrators="gaia", extra=[], isolation=20)
cats
apcat = apertures.getcat_aperture(cats, radius=5)
apcat
###Output
_____no_output_____
###Markdown
2.3 build_apcatalog()For convenience, the get_catalog and getcat_aperture() have been combined into one simple method `build_apcatalog()`It really only does: 1. cats = self.images.get_catalog() 2. return self.getcat_aperture(cats, radius) But that's quite easy to use!
###Code
apcat = apertures.build_apcatalog(radius= np.linspace(1,5,3),
calibrators="gaia", extra=[], isolation=20)
apcat
###Output
_____no_output_____
###Markdown
Using DaskThis happens when you load the `AperturePhotometry`
###Code
from dask.distributed import Client
client = Client()
apertures = aperture.AperturePhotometry.from_filenames(files[:50], use_dask=True)
apertures.basenames # This is known
apertures.images.images # But these are delayed !
d_apcat = apertures.build_apcatalog(radius= np.linspace(1,5,3),
calibrators="gaia", extra=[], isolation=20)
d_apcat # So this is delayed
apcat = d_apcat.compute() # Check your dask dashboard
apcat
###Output
_____no_output_____ |
work/cnn/.ipynb_checkpoints/5_2. Visualize Your Net-checkpoint.ipynb | ###Markdown
CNN for Classification---In this notebook, we define **and train** an CNN to classify images from the [Fashion-MNIST database](https://github.com/zalandoresearch/fashion-mnist). Load the [data](http://pytorch.org/docs/master/torchvision/datasets.html)In this cell, we load in both **training and test** datasets from the FashionMNIST class.
###Code
# our basic libraries
import torch
import torchvision
# data loading and transforming
from torchvision.datasets import FashionMNIST
from torch.utils.data import DataLoader
from torchvision import transforms
# The output of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors for input into a CNN
## Define a transform to read the data in as a tensor
data_transform = transforms.ToTensor()
# choose the training and test datasets
train_data = FashionMNIST(root='./data', train=True,
download=True, transform=data_transform)
test_data = FashionMNIST(root='./data', train=False,
download=True, transform=data_transform)
# Print out some stats about the training and test data
print('Train data, number of images: ', len(train_data))
print('Test data, number of images: ', len(test_data))
# prepare data loaders, set the batch_size
## TODO: you can try changing the batch_size to be larger or smaller
## when you get to training your network, see how batch_size affects the loss
batch_size = 20
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
# specify the image classes
classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
Visualize some training dataThis cell iterates over the training dataset, loading a random batch of image/label data, using `dataiter.next()`. It then plots the batch of images and labels in a `2 x batch_size/2` grid.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(batch_size):
ax = fig.add_subplot(2, batch_size/2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
Define the network architectureThe various layers that make up any neural network are documented, [here](http://pytorch.org/docs/master/nn.html). For a convolutional neural network, we'll use a simple series of layers:* Convolutional layers* Maxpooling layers* Fully-connected (linear) layersYou are also encouraged to look at adding [dropout layers](http://pytorch.org/docs/stable/nn.htmldropout) to avoid overfitting this data.--- TODO: Define the NetDefine the layers of your **best, saved model from the classification exercise** in the function `__init__` and define the feedforward behavior of that Net in the function `forward`. Defining the architecture here, will allow you to instantiate and load your best Net.
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel (grayscale), 10 output channels/feature maps
# 3x3 square convolution kernel
self.conv1 = nn.Conv2d(1, 10, 3)
## TODO: Define the rest of the layers:
# include another conv layer, maxpooling layers, and linear layers
# also consider adding a dropout layer to avoid overfitting
## TODO: define the feedforward behavior
def forward(self, x):
# one activated conv layer
x = F.relu(self.conv1(x))
# final output
return x
###Output
_____no_output_____
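###Markdown
For reference, one possible way to fill in the TODOs above (purely illustrative; your saved model's layer names and sizes must match whatever you actually trained, and the shapes below assume 28x28 grayscale inputs):
###Code
class ExampleNet(nn.Module):

    def __init__(self):
        super(ExampleNet, self).__init__()
        # two conv layers, each followed by relu + 2x2 max-pooling
        self.conv1 = nn.Conv2d(1, 10, 3)   # 28x28 -> 26x26, pooled to 13x13
        self.conv2 = nn.Conv2d(10, 20, 3)  # 13x13 -> 11x11, pooled to 5x5
        self.pool = nn.MaxPool2d(2, 2)
        # fully-connected layers with dropout to limit overfitting
        self.fc1 = nn.Linear(20 * 5 * 5, 50)
        self.fc1_drop = nn.Dropout(p=0.4)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = self.fc1_drop(x)
        x = self.fc2(x)
        return x
###Output
_____no_output_____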
###Markdown
Load a Trained, Saved ModelTo instantiate a trained model, you'll first instantiate a new `Net()` and then initialize it with a saved dictionary of parameters. This notebook needs to know the network architecture, as defined above, and once it knows what the "Net" class looks like, we can instantiate a model and load in an already trained network.You should have a trained net in `saved_models/`.
###Code
# instantiate your Net
net = Net()
# load the net parameters by name, uncomment the line below to load your model
# net.load_state_dict(torch.load('saved_models/model_1.pt'))
print(net)
###Output
Net(
(conv1): Conv2d(1, 10, kernel_size=(3, 3), stride=(1, 1))
)
###Markdown
Feature VisualizationTo see what your network has learned, make a plot of the learned image filter weights and the activation maps (for a given image) at each convolutional layer. TODO: Visualize the learned filter weights and activation maps of the convolutional layers in your trained NetChoose a sample input image and apply the filters in every convolutional layer to that image to see the activation map.
###Code
# As a reminder, here is how we got the weights in the first conv layer (conv1), before
weights = net.conv1.weight.data
w = weights.numpy()
print(w)
###Output
[[[[ 0.04752767 -0.1216577 -0.15158328]
[ 0.2521365 0.09702411 -0.20449114]
[ 0.06988743 0.00865737 0.0531238 ]]]
[[[-0.23107748 -0.07265747 0.07246837]
[ 0.22950807 0.32860252 -0.16815715]
[ 0.14321521 0.17250052 0.24964622]]]
[[[-0.18996248 -0.2519427 0.25682905]
[-0.3275975 -0.3325174 0.05502641]
[ 0.27562585 0.16772673 -0.19035558]]]
[[[-0.30419272 -0.2577514 0.07050017]
[-0.1318239 -0.28116232 -0.18166725]
[ 0.02859715 -0.12828752 -0.06197989]]]
[[[ 0.10293999 -0.13430433 -0.28918347]
[ 0.19804636 -0.0886049 0.0885798 ]
[-0.00524676 0.26774797 -0.31866702]]]
[[[-0.0457238 0.11802539 -0.20062074]
[-0.22150128 0.18413255 0.13732842]
[ 0.07573727 0.04150036 0.29611734]]]
[[[-0.18536425 0.17961249 -0.09465656]
[ 0.00570369 -0.30196106 -0.11854236]
[ 0.08420452 -0.21577525 0.10378453]]]
[[[-0.21734974 -0.11131875 0.12969151]
[-0.03339615 -0.00281438 -0.22645983]
[-0.14174446 0.166765 -0.29057992]]]
[[[ 0.11403275 0.20555297 0.12158892]
[-0.0834512 -0.31999406 -0.3298157 ]
[ 0.05782902 0.32936344 0.19357589]]]
[[[-0.22415039 -0.00557697 0.25779316]
[ 0.08063042 0.12392583 0.18501022]
[ 0.3169507 -0.00659132 0.1617705 ]]]]
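###Markdown
A minimal sketch of the requested visualization (not the exercise solution): plot the ten learned 3x3 kernels in conv1, then the activation maps they produce for one test image.
###Code
# learned filter weights
fig = plt.figure(figsize=(12, 3))
for i in range(w.shape[0]):
    ax = fig.add_subplot(1, w.shape[0], i+1, xticks=[], yticks=[])
    ax.imshow(w[i][0], cmap='gray')
    ax.set_title('filter %d' % i)

# activation maps of conv1 for a single test image
dataiter = iter(test_loader)
images, labels = dataiter.next()
activations = F.relu(net.conv1(images[0:1])).detach()  # shape (1, 10, 26, 26)
fig = plt.figure(figsize=(12, 3))
for i in range(activations.shape[1]):
    ax = fig.add_subplot(1, activations.shape[1], i+1, xticks=[], yticks=[])
    ax.imshow(activations[0, i].numpy(), cmap='gray')
###Output
_____no_output_____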
|
_notebooks/2021-10-17-Tidy-Data-in-Python.ipynb | ###Markdown
Tidy Data in Python> A tutorial of converting a messy Excel spreadsheet into a tidy long-formatted Pandas DataFrame.- toc: false - badges: true- comments: true- categories: [data cleaning, pandas] In this post we will be walking through the process of converting a messy Excel worksheet into tidy data. According to Hadley Wickham's excellent book, [*R For Data Science*](https://r4ds.had.co.nz/index.html), tidy data follows three main principles:1. Each variable must have its own column.2. Each observation must have its own row.3. Each value must have its own cell.The initial work of organizing the data will pay dividends down the road as your data will be uniform and easier to work with.For this tutorial we will be using the [farm sector balance sheet](https://data.ers.usda.gov/reports.aspx?ID=17835) provided by the United States Department of Agriculture (USDA).  The USDA Excel shows a time series from 2014-2020 (with forecasted values for 2021). It is in a wide format with the variables (items in column 'A') representing rows instead of columns. Our goal will be to transform the dataframe into the below shape. We will have 5 columns:1. Year2. Balance item3. Amount4. Forecast (a boolean column indicating true if the amount is a forecast or historical data)5. Report dateThe last two columns (forecast and report date) may seem a little unnecessary. I included them because they will potentially be helpful keys if we were to include the report into a larger database. For instance, if we wanted to keep an archival database of all the farm sector balance sheets we could quickly identify observations with their report data. Additionally, I really like to indicate if the value is a forecast as it can lead into some interesting insights as to how their forecast changes over time and how it ends up performing to actual data. The first step is to load our packages and then the Excel data into a dataframe. All we need is Numpy and Pandas. We will use Pandas' `read_excel()` function to load the dataset. We'll pull data starting in row 3 of the Excel (we use `header=2` here because `read_excel()` is zero-index while the spreadsheet is indexed at 1) and we'll read just for the first table (29 rows). Immediately after loading the data we will pull the date of the report into a variable that will be helpful once we create the report date column.
###Code
import numpy as np
import pandas as pd
#hide
file_path = r"G:\My Drive\Data Analysis\Blog\Data\2021-10-17_farmsectorindicators_september2021.xlsx"
# File path will wherever you download/store the farm sector balance sheet Excel
farm_raw = pd.read_excel(file_path, sheet_name=0, header=2, nrows=25)
report_date = farm_raw.columns[4]
#hide_input
farm_raw.head()
###Output
_____no_output_____
###Markdown
As you tell the above dataset is pretty messy. The first thing we will want to do is make the first row (remember it's zero-indexed) the column names. After that we can drop rows and columns that have `NaN`s in them as well as the two columns that contain year-over-year percent change (we will be creating a separate dataframe in a different blog that just measures this).
###Code
farm_raw.columns = farm_raw.iloc[1]
# Remove rows that have NaNs in them
farm_raw = farm_raw.dropna(axis=0, how='all')
# Remove the one column that is an NaN
## The below code slices the DataFrame to include only the columns that do not have a null name
farm_raw = farm_raw.loc[:, farm_raw.columns.notnull()]
farm_raw = farm_raw.drop(columns=['2019 - 20', '2020 - 21F'])
#hide_input
farm_raw.head(10)
###Output
_____no_output_____
###Markdown
Let's set the first column as an index so it is a little easier to work with. Also, we can go ahead and delete the first four rows of the dataframe (index[:4]) as they don't contain useful information.
###Code
# Since the first column (the one we want to be the index) does not have a name we will access it in a bit of a convoluted way
## We need to make the columns a list and then we can select the first column
farm_raw = farm_raw.set_index(list(farm_raw.columns[0]))
farm_raw = farm_raw.drop(index=farm_raw.index[:4])
# View the index
farm_raw.index
###Output
_____no_output_____
###Markdown
As we can see, the index items are messy with various letters preceeding the names and footnotes still present (e.g. '1/'). We will use a series of pandas string methods to clean up the that text column.
###Code
# Convert the index to lower case
farm_raw.index = farm_raw.index.str.lower()
# Using regular expressions to remove the lower-case row labels
# E.g. the 'a.' in 'a. Cash receipts'
## Since there is no str.remove function we will just replace the pattern we want to drop with an empty string
farm_raw.index = farm_raw.index.str.replace(r'[a-z]\.', '')
# Remove all the parentheses and the chartacters within them
# E.g. the '(a+b+c)' in 'g. Gross cash income (a+b+c)'
farm_raw.index = farm_raw.index.str.replace(r'\(([^\)]+)\)', '')
# Remove all the footnote labels
# E.g. the '2/' in 'Federal Government direct farm program payments'
farm_raw.index = farm_raw.index.str.replace(r'[1-9]/', '')
# Remove all commas
farm_raw.index = farm_raw.index.str.replace(',', '')
# Remove all the white space before and after the strong
farm_raw.index = farm_raw.index.str.strip()
# Replace spaces with underscores
farm_raw.index = farm_raw.index.str.replace(' ', '_')
###Output
_____no_output_____
###Markdown
With the columns cleaned up we can put the data into long format. The first thing we will do is transpose our dataframe (make the columns the rows and the rows the columns).
###Code
# Transpose will automatically make the columns an index so we'll reset the index so it remains a column
## This will allow us to melt the data frame (next step) easier
farm_raw = farm_raw.transpose().reset_index()
#hide_input
farm_raw.head()
###Output
_____no_output_____
###Markdown
Next we will melt the dataframe. This powerful function (`pd.melt()`) makes our current wide dataframe into a long dataframe. The [documentation](https://pandas.pydata.org/docs/reference/api/pandas.melt.html) describes it as this:"one or more columns are identifier variables (for our case the year column), while all other columns, considered measured variables are 'unpivoted' to the row axis, leaving just two non-identifier columns, 'variable' (measurement, e.g. 'cash_receipts') and 'value' (the balance values... the numbers)."
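###Markdown
A tiny toy example of what melt does before applying it to the real data (the column names here are illustrative, not the farm data):
###Code
import pandas as pd
# wide: one row per year, one column per item
wide = pd.DataFrame({"year": [2019, 2020], "cash_receipts": [1.0, 1.1], "farm_income": [0.5, 0.6]})
# long: one row per (year, item) pair, with 'variable' and 'value' columns
pd.melt(frame=wide, id_vars=["year"])
###Output
_____no_output_____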
###Code
# The identifier variables will be the year column - which happens to be named '1'
farm_raw = pd.melt(frame=farm_raw, id_vars=[1])
#hide_input
farm_raw.head(20)
###Output
_____no_output_____
###Markdown
As you can see, our dataframe now consists of just three columns. We could have kept it unmelted and it would have technically been tidy. Each row was an observation and each column was a separate variable. However, we want to add two more columns for each observation - if it was a forecast and the date of the report it was associated with.First let's rename those columns that we already have.
###Code
farm_raw.columns = ['year', 'balance_item','value']
###Output
_____no_output_____
###Markdown
Next let's add a boolean column (true or false) to show whether the observation was a forecast or not. The Excel indicated forecasts by adding an "F" at the end of the date (2021F). We will use Numpy's `where` function to indicate False for columns that just have 4 numbers and True for everything else (columns with an "F" for forecast).
###Code
farm_raw['forecast'] = np.where(farm_raw['year'].str.contains('0000'), False, True)
###Output
_____no_output_____
###Markdown
Now we can add a column showing the report date (remember we pulled this earlier from the spreadsheet and saved it into a variable report_date).
###Code
farm_raw['report_date'] = report_date
#hide_input
farm_raw.head()
###Output
_____no_output_____
###Markdown
We're almost there! Lets dig a little deeper into our columns (variables) and see what data types they are.
###Code
# Use the dtypes method on the dataframe to see what type of data each column is
farm_raw.dtypes
###Output
_____no_output_____
###Markdown
Good thing we checked! Both our year column and value column are objects when we would want them to be integers and floats, respectively. This makes sense if you remember our original data set (especially since we never manually assigned the columns data types - a good habit I should admittedly get better with). There were likely some strings and floats in the columns that the year and balance_item columns are derived from, so they automatically got converted into objects.
###Code
# Some of the year values were read in as a float (since they weren't all one initial column they may have been read in as different types)
farm_raw['year'] = farm_raw['year'].astype(str)
# Remove all non-digits (D). This is meant to drop the 'F'
farm_raw['year'] = farm_raw['year'].str.replace('\D','')
# Only include the 4 numbers for a year
farm_raw['year'] = farm_raw['year'].str.slice(stop=4)
# Convert the column to an integer
farm_raw['year'] = farm_raw['year'].astype(int)
###Output
_____no_output_____
###Markdown
The value column is much easier. We can just convert it to a float using the above .astype() function.
###Code
farm_raw.value = farm_raw.value.astype(float)
farm_raw.dtypes
###Output
_____no_output_____
###Markdown
Much better! All our values are now datatypes we would expect. And with that, we've cleaned the data! There's still much more we can do. We can easily navigate and filter this dataframe with Pandas, add on previous reports from USDA, and create graphics. In the future I'll have a blog post that will show how we can easily create a corresponding dataframe that represents the data in year-over-year percent change - a valuable way to look at economic data.As a final step let's make the `farm_raw` into just `farm` and then take a look at our clean and tidy dataset!
###Code
farm = farm_raw
#hide_input
farm.head(20)
###Output
_____no_output_____ |
additional_reference_notebooks/mvp_notebook_daniel.ipynb | ###Markdown
MVP Notebook Daniel
###Code
import preprocessing
import wrangle
import model
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import confusion_matrix
from sklearn.metrics import recall_score
# ignore warnings
import warnings
warnings.simplefilter(action='ignore')
import os.path
from os import path
import re
df = preprocessing.get_model_df()
df
df = preprocessing.add_new_features(df)
def filter_top_cities(df):
df["city_state"] = df["city"] + "_" + df["state"]
city_mask = df.groupby("city_state").year.count()
city_mask = city_mask[city_mask == 15]
# apply city mask to shrink the df
def in_city_mask(x):
return x in city_mask
df = df[df.city_state.apply(in_city_mask)]
df = df.sort_values(["city", "state", "year"])
return df
df = filter_top_cities(df)
###Output
_____no_output_____
###Markdown
Adding the labeling
###Code
# # Using the Evolution Index as a label:
# # For values that are higher than 100% in evolution index.
# df["ei_label"] = np.where(df.ei > 1, 1, 0)
# using future data to create the labels
def labeling_future_data(df):
"""this function takes in a data frame and returns a boolean column that identifies
if a city_state_year is a market that should be entered"""
df["label_quantity_of_mortgages_pop_2y"] = (df.sort_values(["year"])
.groupby(["city", "state"])[["quantity_of_mortgages_pop"]]
.pct_change(2)
.shift(-2))
df["label_total_mortgage_volume_pop_2y"] = (df.sort_values(["year"])
.groupby(["city", "state"])[["total_mortgage_volume_pop"]]
.pct_change(2)
.shift(-2))
Q3 = df.label_quantity_of_mortgages_pop_2y.quantile(.75)
Q1 = df.label_quantity_of_mortgages_pop_2y.quantile(.25)
upper_fence_quantity = Q3 + ((Q3-Q1)*1.5)
upper_fence_quantity
Q3 = df.label_total_mortgage_volume_pop_2y.quantile(.75)
Q1 = df.label_total_mortgage_volume_pop_2y.quantile(.25)
upper_fence_volume = Q3 + ((Q3-Q1)*1.5)
upper_fence_volume
df['should_enter'] = (df.label_total_mortgage_volume_pop_2y > upper_fence_volume) | (df.label_quantity_of_mortgages_pop_2y > upper_fence_quantity)
return df
df = labeling_future_data(df)
df.should_enter.value_counts()
df.info()
from sklearn.model_selection import cross_val_score, train_test_split, GridSearchCV
def train_test_data(df):
train, test = train_test_split(df, train_size=.75, random_state=123, stratify = df["should_enter"])
return train, test
#__Main Pre-modeling function__#
def prep_data_for_modeling(df, features_for_modeling, label_feature):
# To avoid Nan's, I have removed all data from 2006 (because all the var's would be nan)
df_model = df[df.year > 2007]
# Create an observation id to reduce the chance of mistake's
df_model["observation_id"] = df_model.city + "_" + df_model.state + "_" + df_model.year.astype(str)
# select that features that we want to model, and use our observation id as the row id
features_for_modeling += ["observation_id"]
features_for_modeling += [label_feature]
data = df_model[features_for_modeling].set_index("observation_id")
train, test = train_test_data(data)
train = train.sort_values("observation_id")
test = test.sort_values("observation_id")
X_train = train.drop(columns=label_feature)
y_train = train[label_feature]
X_test = test.drop(columns=label_feature)
y_test = test[label_feature]
return X_train, y_train, X_test, y_test
features_for_modeling = ["quantity_of_mortgages_pop", "city_state_qty_delta_pop", "ei", "median_mortgage_amount_pop"]
label_feature = "should_enter"
X_train, y_train, X_test, y_test = prep_data_for_modeling(df, features_for_modeling, label_feature)
# Helper function used to update the scaled arrays and transform them into usable dataframes
def return_values(scaler, train, test):
train_scaled = pd.DataFrame(scaler.transform(train), columns=train.columns.values).set_index([train.index.values])
test_scaled = pd.DataFrame(scaler.transform(test), columns=test.columns.values).set_index([test.index.values])
return scaler, train_scaled, test_scaled
# Linear scaler
def min_max_scaler(train, test):
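    # (fit the scaler on the training data only, so no information from the test set leaks into the scaling)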
scaler = MinMaxScaler().fit(train)
scaler, train_scaled, test_scaled = return_values(scaler, train , test)
return scaler, train_scaled, test_scaled
# Scaler is ready - in case we need it
scaler, train_scaled, test_scaled = min_max_scaler(X_train, X_test)
assert(train_scaled.shape[1] == test_scaled.shape[1])
train_scaled.head()
train_scaled.isnull().sum()
grid, df_result, best_model = model.run_decision_tree_cv(train_scaled, y_train)
grid, df_result, best_model = model.run_random_forest_cv(train_scaled, y_train)
grid, df_result, best_model = model.run_knn_cv(train_scaled, y_train)
###Output
{'n_neighbors': 3, 'weights': 'uniform', 'score': 0.12820512820512822}
###Markdown
---- Evaluation
###Code
grid, df_result, best_model = model.run_decision_tree_cv(train_scaled, y_train)
y_pred = best_model.predict(train_scaled)
labels = sorted(y_train.unique())
matrix = pd.DataFrame(confusion_matrix(y_train, y_pred), index = labels, columns = labels)
recall_score(y_train, y_pred)
print(matrix)
best_model.score(test_scaled, y_test)
y_pred = best_model.predict(test_scaled)
labels = sorted(y_train.unique())
matrix = pd.DataFrame(confusion_matrix(y_test, y_pred), index = labels, columns = labels)
recall_score(y_test, y_pred)
print(matrix)
best_model.score(train_scaled, y_train)
y_train
###Output
_____no_output_____
###Markdown
---- Prediction
###Code
model_df = preprocessing.get_model_df()
df["city_state"] = df["city"] + "_" + df["state"]
city_mask = df.groupby("city_state").year.count()
city_mask = city_mask[city_mask == 15]
# apply city mask to shrink the df
def in_city_mask(x):
return x in city_mask
df = df[df.city_state.apply(in_city_mask)]
df = preprocessing.add_new_features(df)
df = df.sort_values(["city", "state", "year"])
df.head()
features_for_predicting = ["quantity_of_mortgages_pop", "city_state_qty_delta_pop", "ei", "median_mortgage_amount_pop"]
predictions = df[(df.year == 2020) | (df.year == 2019)].groupby("city_state")[features_for_predicting].mean()
predictions
# Helper function used to update the scaled arrays and transform them into usable dataframes
def return_values_prediction(scaler, df):
train_scaled = pd.DataFrame(scaler.transform(df), columns=df.columns.values).set_index([df.index.values])
return scaler, train_scaled
# Linear scaler
def min_max_scaler_prediction(df):
scaler = MinMaxScaler().fit(df)
scaler, df_scaled = return_values_prediction(scaler, df)
return scaler, df_scaled
scaler, predictions_scaled = min_max_scaler_prediction(predictions)
predictions["label"] = best_model.predict(predictions_scaled)
predictions
city = predictions.reset_index().city_state.str.split("_", n=1, expand=True)[0]
state = predictions.reset_index().city_state.str.split("_", n=1, expand=True)[1]
predictions = predictions.reset_index()
predictions["city"] = city
predictions["state"] = state
predictions
predictions.to_csv("predictions.csv")
plt.figure(figsize=(15,5))
ax = sns.barplot(data=predictions, x="city", y="ei", hue="label")
plt.title("What markets will look like in 2021, based on evolution index")
plt.xticks(rotation=45, ha="right")
plt.xlabel("City")
plt.ylabel("Evolution Index (%)")
new_labels = ['Markets to not enter', 'Markets to enter']
h, l = ax.get_legend_handles_labels()
ax.legend(h, new_labels)
plt.show()
###Output
_____no_output_____
###Markdown
Notes for improvement:* Calculate modeling by hand* use oversampling to increase the number of positive occurrences.* Look at the docs to stratify the data better in cross validation ----- ModelingWe will be using classification algorithms to predict what markets will be hot as of 2020/2021. This will help us create recommendations for the future, so that we know which markets will be worth investing resources and labor in, and which markets are worth ignoring.We will likely be using the following features for modeling:```pythonfeatures_for_modeling = ["quantity_of_mortgages_pop", "city_state_qty_delta_pop", "ei", "median_mortgage_amount_pop"]```Our target variable (the variable we are trying to predict) will be:```pythonlabel_feature = "should_enter"```In this case, our positive case will be `should_enter_market`. When looking at our confusion matrix, and all of its possible outcomes, it would likely look as follows:| Matrix | Actual Positive | Actual Negative ||--------|-----------------|-----------------|| Predicted Positive | `enter_market` | predicted `enter_market`, but really it was a cold market, and not worth investing | | Predicted Negative | predicted `not_enter_market`, but really it was a hot market and a missed opportunity | `not_enter_market`Traditionally, for a project like this one, we would have focused on reducing the number of `False_Positives`, because it would be far more expensive to the stakeholder if we predicted a city was going to be hot, they spend time and money, and their investment is not returned. However, because TestFit's business strategy and software deployment are all done online, very little investment is needed for traveling, which means that actually investing in a city is not costly at all. As such, we will optimize our models to reduce the number of `False_Negatives`, because we want to make sure we are not missing any potential markets that can be considered `hot markets` in 2020 and 2021.Given that we have a low number of `positive` labels in our data, we will have to do something called **Oversampling**. This is a practice used in the field to help the predictive model by calling attention to the positive labels and their patterns. We will create duplicate positive values, so that the model becomes more effective at predicting these values.
###Code
def split_data(df, train_size=.75,random_state = 124):
train, test = train_test_split(df, train_size=train_size, random_state=random_state, stratify = df["should_enter"])
train, validate = train_test_split(train, train_size=train_size, random_state=random_state, stratify = train["should_enter"])
return train, validate, test
#__Main Pre-modeling function__#
def prep_data_for_modeling(df, features_for_modeling, label_feature):
# To avoid Nan's, I have removed all data from 2006 (because all the var's would be nan)
df_model = df[df.year > 2007]
# Create an observation id to reduce the chance of mistake's
df_model["observation_id"] = df_model.city + "_" + df_model.state + "_" + df_model.year.astype(str)
# select that features that we want to model, and use our observation id as the row id
features_for_modeling += ["observation_id"]
features_for_modeling += [label_feature]
data = df_model[features_for_modeling].set_index("observation_id")
train, validate, test = split_data(data)
train = train.sort_values("observation_id")
validate = validate.sort_values("observation_id")
test = test.sort_values("observation_id")
X_train = train.drop(columns=label_feature)
y_train = train[label_feature]
X_validate = validate.drop(columns=label_feature)
y_validate = validate[label_feature]
X_test = test.drop(columns=label_feature)
y_test = test[label_feature]
return X_train, X_validate, X_test, y_train, y_validate, y_test
def return_values(scaler, train, validate, test):
'''
Helper function used to updated the scaled arrays and transform them into usable dataframes
'''
train_scaled = pd.DataFrame(scaler.transform(train), columns=train.columns.values).set_index([train.index.values])
validate_scaled = pd.DataFrame(scaler.transform(validate), columns=validate.columns.values).set_index([validate.index.values])
test_scaled = pd.DataFrame(scaler.transform(test), columns=test.columns.values).set_index([test.index.values])
return scaler, train_scaled, validate_scaled, test_scaled
# Linear scaler
def min_max_scaler(train,validate, test):
'''
Helper function that scales that data. Returns scaler, as well as the scaled dataframes
'''
scaler = MinMaxScaler().fit(train)
scaler, train_scaled, validate_scaled, test_scaled = return_values(scaler, train, validate, test)
return scaler, train_scaled, validate_scaled, test_scaled
df = preprocessing.get_model_df()
df = preprocessing.add_new_features(df)
df = filter_top_cities(df)
df = labeling_future_data(df)
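# Oversampling: each append duplicates the current positive rows, so three passes leave roughly 8x the original number of positive observations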
df = df.append(df[df.should_enter])
df = df.append(df[df.should_enter])
df = df.append(df[df.should_enter])
# What percent of the data is positive?
(df.should_enter).mean()
features_for_modeling = ["quantity_of_mortgages_pop", "city_state_qty_delta_pop", "ei", "median_mortgage_amount_pop"]
label_feature = "should_enter"
X_train, X_validate, X_test, y_train, y_validate, y_test = prep_data_for_modeling(df, features_for_modeling, label_feature)
scaler, train_scaled, validate_scaled, test_scaled = min_max_scaler(X_train, X_validate, X_test)
train_scaled
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
predictions = pd.DataFrame({"actual": y_train, "baseline": y_train.mode()[0]})
for i in range(1, 20):
clf, y_pred = model.run_clf(train_scaled, y_train, i)
score = clf.score(train_scaled, y_train)
validate_score = clf.score(validate_scaled, y_validate)
_, _, report = model.accuracy_report(clf, y_pred, y_train)
recall_score = report["True"].recall
print(f"Max_depth = {i}, accuracy_score = {score:.2f}. validate_score = {validate_score:.2f}, recall = {recall_score:.2f}")
clf, y_pred = model.run_clf(train_scaled, y_train, 4)
predictions["decision_tree"] = y_pred
accuracy_score, matrix, report = model.accuracy_report(clf, y_pred, y_train)
print(accuracy_score)
print(matrix)
report
coef = clf.feature_importances_
# We want to check that the coef array has the same number of items as there are features in our X_train dataframe.
assert(len(coef) == train_scaled.shape[1])
coef = clf.feature_importances_
columns = train_scaled.columns
df = pd.DataFrame({"feature": columns,
"feature_importance": coef,
})
df = df.sort_values(by="feature_importance", ascending=False)
sns.barplot(data=df, x="feature_importance", y="feature", palette="Blues_d")
plt.title("What are the most influencial features?")
###Output
_____no_output_____
###Markdown
Interestingly, it seems that when it comes to the decision tree, the `evolution_index` is actually the most indicative feature, alongside the change in the number of mortgages approved. The total `quantity_of_mortgages_pop` doesn't seem to be as influential in the predictions. Random Forest
###Code
for i in range(1, 20):
rf, y_pred = model.run_rf(train_scaled, y_train, 1, i)
score = rf.score(train_scaled, y_train)
validate_score = rf.score(validate_scaled, y_validate)
_, _, report = model.accuracy_report(rf, y_pred, y_train)
recall_score = report["True"].recall
print(f"Max_depth = {i}, accuracy_score = {score:.2f}. validate_score = {validate_score:.2f}, recall = {recall_score:.2f}")
rf, y_pred = model.run_rf(train_scaled, y_train, 1, 3)
predictions["random_forest"] = y_pred
accuracy_score, matrix, report = model.accuracy_report(rf, y_pred, y_train)
print(accuracy_score)
print(matrix)
report
coef = rf.feature_importances_
columns = X_train.columns
df = pd.DataFrame({"feature": columns,
"feature_importance": coef,
})
df = df.sort_values(by="feature_importance", ascending=False)
sns.barplot(data=df, x="feature_importance", y="feature", palette="Blues_d")
plt.title("What are the most influencial features?")
###Output
_____no_output_____
###Markdown
Interestingly, for the random_forest model, the delta of the number of loans approved by city was the most important or influential indicator of whether a city would be `a hot market` or not. The evolution index was the second most influential feature. Again, the total `quantity_of_mortgages_pop` was the least influential feature. KNN
###Code
for i in range(1, 20):
knn, y_pred = model.run_knn(train_scaled, y_train, i)
score = knn.score(train_scaled, y_train)
validate_score = knn.score(validate_scaled, y_validate)
_, _, report = model.accuracy_report(knn, y_pred, y_train)
recall_score = report["True"].recall
print(f"Max_depth = {i}, accuracy_score = {score:.2f}. validate_score = {validate_score:.2f}, recall = {recall_score:.2f}")
knn, y_pred = model.run_knn(train_scaled, y_train, 2)
predictions["knn"] = y_pred
accuracy_score, matrix, report = model.accuracy_report(knn, y_pred, y_train)
print(accuracy_score)
print(matrix)
report
# How do the different models compare on accuracy?
print("Accuracy Scores")
print("---------------")
for i in range(predictions.shape[1]):
report = model.create_report(predictions.actual, predictions.iloc[:,i])
print(f'{predictions.columns[i].title()} = {report.accuracy[0]:.2f}')
# How do the different models compare on recall?
print("Recall Scores")
print("---------------")
for i in range(predictions.shape[1]):
report = model.create_report(predictions.actual, predictions.iloc[:,i])
print(f'{predictions.columns[i].title()} = {report["True"].loc["recall"]:.2f}')
# How do the different models compare on precision?
print("Precision Scores")
print("---------------")
for i in range(predictions.shape[1]):
report = model.create_report(predictions.actual, predictions.iloc[:,i])
print(f'{predictions.columns[i].title()} = {report["True"].loc["precision"]:.2f}')
###Output
Precision Scores
---------------
Actual = 1.00
Baseline = 0.59
Decision_Tree = 0.76
Random_Forest = 0.78
Knn = 1.00
###Markdown
Conclusion: Overall, we see that because we have optimized for *recall*, the accuracy scores are a bit lower than expected. However, our recall scores are very good. We will choose the KNN model as the most effective model, given that it consistently achieved the best scores (for accuracy, recall and precision). Evaluate
###Code
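# Quick illustration (assuming the printed matrices use rows = actual labels and
# columns = predicted labels) of how accuracy, precision and recall are read off a
# confusion matrix such as the random-forest test matrix printed below ([[45, 27], [16, 86]]).
import numpy as np
cm = np.array([[45, 27], [16, 86]])                # [[TN, FP], [FN, TP]]
tn, fp, fn, tp = cm.ravel()
print(f"accuracy  = {(tp + tn) / cm.sum():.2f}")   # ~0.75
print(f"precision = {tp / (tp + fp):.2f}")         # ~0.76
print(f"recall    = {tp / (tp + fn):.2f}")         # ~0.84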
rf, y_pred = model.run_rf(train_scaled, y_train, 1, 3)
y_pred = rf.predict(test_scaled)
accuracy_score, matrix, report = model.accuracy_report(rf, y_pred, y_test)
print(accuracy_score)
print(matrix)
report
###Output
Accuracy on dataset: 0.75
False True
False 45 27
True 16 86
###Markdown
---
###Code
knn, y_pred = model.run_knn(train_scaled, y_train, 2)
y_pred = knn.predict(test_scaled)
accuracy_score, matrix, report = model.accuracy_report(knn, y_pred, y_test)
print(accuracy_score)
print(matrix)
report
###Output
Accuracy on dataset: 0.90
False True
False 55 17
True 0 102
###Markdown
---- Prediction
###Code
df = preprocessing.get_model_df()
df = preprocessing.add_new_features(df)
df = filter_top_cities(df)
df.head()
features_for_predicting = ["quantity_of_mortgages_pop", "city_state_qty_delta_pop", "ei", "median_mortgage_amount_pop"]
predictions = df[(df.year == 2020) | (df.year == 2019)].groupby("city_state")[features_for_predicting].mean()
predictions
# Helper function used to updated the scaled arrays and transform them into usable dataframes
def return_values_prediction(scaler, df):
train_scaled = pd.DataFrame(scaler.transform(df), columns=df.columns.values).set_index([df.index.values])
return scaler, train_scaled
# Linear scaler
def min_max_scaler_prediction(df):
scaler = MinMaxScaler().fit(df)
scaler, df_scaled = return_values_prediction(scaler, df)
return scaler, df_scaled
scaler, predictions_scaled = min_max_scaler_prediction(predictions)
predictions["label"] = rf.predict(predictions_scaled)
predictions
city = predictions.reset_index().city_state.str.split("_", n=1, expand=True)[0]
state = predictions.reset_index().city_state.str.split("_", n=1, expand=True)[1]
predictions = predictions.reset_index()
predictions["city"] = city
predictions["state"] = state
predictions
predictions[predictions.label == True]
predictions.to_csv("predictions.csv")
plt.figure(figsize=(15,5))
ax = sns.barplot(data=predictions, x="city", y="ei", hue="label")
plt.title("What markets will look like in 2021, based on evolution index")
plt.xticks(rotation=45, ha="right")
plt.xlabel("City")
plt.ylabel("Evolution Index (%)")
new_labels = ['Markets to not enter', 'Markets to enter']
h, l = ax.get_legend_handles_labels()
ax.legend(h, new_labels)
plt.show()
###Output
_____no_output_____ |
src/my_scrape.ipynb | ###Markdown
My StackOverflow Question Saver[https://www.dataquest.io/blog/web-scraping-tutorial-python/](https://www.dataquest.io/blog/web-scraping-tutorial-python/) `import` section
###Code
import requests
from bs4 import BeautifulSoup
import pandas as pd
import html2markdown
###Output
_____no_output_____
###Markdown
BeautifulSoup section
###Code
url = "https://stackoverflow.com/questions/11465555/can-we-use-xpath-with-beautifulsoup"
page = requests.get(url)
page
soup = BeautifulSoup(page.content, 'html.parser')
print(soup.prettify())
list(soup.children)
[type(item) for item in list(soup.children)]
html = list(soup.children)[2]
list(html.children)
body = list(html.children)[3]
list(body.children)
p = list(body.children)[1]
p.get_text()
###Output
_____no_output_____
###Markdown
Finding all instances of a tag at once
###Code
soup = BeautifulSoup(page.content, 'html.parser')
soup.find_all('p')
soup.find_all('p')[0].get_text()
###Output
_____no_output_____
###Markdown
If you only want to find the first instance of a tag
###Code
soup.find('p').get_text()
###Output
_____no_output_____
###Markdown
Searching for tags by class and id
###Code
main_div = soup.find(id="question")
main_div_p = main_div.find_all("p", class_=None)
main_div_p
###Output
_____no_output_____
###Markdown
Get `question_title` ❔📜
###Code
question_header = soup.find(id="question-header")
title_text = question_header.find("a")
question_title = title_text.text
question_title
###Output
_____no_output_____
###Markdown
Get question text ❓ 📄
###Code
post = soup.find("div", {"itemprop": "text"}).renderContents()
question_md = html2markdown.convert(post)
print(question_md)
###Output
_____no_output_____
###Markdown
Get answer
###Code
answer_div = soup.find("div", {"itemprop":"acceptedAnswer"})
answer_div_test = soup.find("div", {"itemprop":"acceptedAnswer"}).renderContents()
answer_md = html2markdown.convert(answer_div_test)
print(answer_md)
answer_div_test
###Output
_____no_output_____
###Markdown
Using CSS Selectors
###Code
soup.select("div p")
###Output
_____no_output_____
###Markdown
Downloading weather data
###Code
url_sf = "https://forecast.weather.gov/MapClick.php?lat=37.7772&lon=-122.4168#.XpbTnOiJLDf"
url_indy = "https://forecast.weather.gov/MapClick.php?lat=39.7669&lon=-86.15#.XpbDe-iJLDc"
wpage = requests.get(url_sf)
soup = BeautifulSoup(wpage.content, 'html.parser')
seven_day = soup.find(id="seven-day-forecast")
forecast_items = seven_day.find_all(class_="tombstone-container")
tonight = forecast_items[0]
print(tonight.prettify())
period = tonight.find(class_="period-name").get_text()
short_desc = tonight.find(class_="short-desc").get_text()
temp = tonight.find(class_="temp").get_text()
print(period)
print(short_desc)
print(temp)
img = tonight.find("img")
desc = img['title']
print(desc)
###Output
_____no_output_____
###Markdown
Extracting all the information from the page
###Code
period_tags = seven_day.select(".tombstone-container .period-name")
periods = [pt.get_text() for pt in period_tags]
periods
short_descs = [sd.get_text() for sd in seven_day.select(".tombstone-container .short-desc")]
temps = [t.get_text() for t in seven_day.select(".tombstone-container .temp")]
descs = [d["title"] for d in seven_day.select(".tombstone-container img")]
print(short_descs)
print(temps)
print(descs)
###Output
_____no_output_____
###Markdown
Combining our data into a Pandas Dataframe
###Code
weather = pd.DataFrame({
"period":periods,
"short_desc": short_descs,
"temp": temps,
"desc": descs
})
weather
###Output
_____no_output_____ |
trainingDataGenerate/trainingDataGenerate.ipynb | ###Markdown
This file takes Ground Truth (GT) images and labels cell boundaries and interiors for training. The inputs are 3D GT images labeled with unique integers that represent each cell object. The outputs are nii images in which background voxels = 0, cell interior = 1, and cell boundary = 2. Output files are saved in ./output/boundary_interior_output
###Code
import numpy as np
import matplotlib
import os
import skimage.segmentation as seg
from skimage.io import imsave, imread
import nibabel as nib
import pathlib
###Output
_____no_output_____
###Markdown
change image directory to load images
###Code
# an example
image_dir = os.path.join('data', 'singlePopulation')
for img in os.listdir(image_dir):
image = imread(os.path.join(image_dir, img))
print(image.shape)
image_size_x, image_size_y = image[1].shape
# make a semantic mask
semantic_masks = np.zeros(( len(image), image_size_x, image_size_y)) #, dtype = K.floatx())
# print(semantic_masks.shape)
# print(images.shape)
edges = seg.find_boundaries(image, mode = 'thick')
interior = 2*(image > 0)
semantic_mask = edges + interior
semantic_mask[semantic_mask == 3] = 1
# Swap category names - edges category 2, interior category 1, background category 0
semantic_mask_temp = np.zeros(semantic_mask.shape, dtype = 'int')
semantic_mask_temp[semantic_mask == 1] = 2
semantic_mask_temp[semantic_mask == 2] = 1
semantic_mask = semantic_mask_temp
print(semantic_masks.shape)
# save as nii
binary_seg = np.transpose(semantic_mask)
# make nifty images
bseg = nib.Nifti1Image(binary_seg.astype(np.uint32), affine=np.eye(4))
output_path= os.path.join('output', 'boundary_interior_output')
    pathlib.Path(output_path).mkdir(parents=True, exist_ok=True) # create directory if necessary
new_name = img.replace('.tif','')
# change the saved file names
nib.nifti1.save(bseg, os.path.join(output_path, new_name + '.nii'))
###Output
(120, 220, 220)
(120, 220, 220)
(117, 214, 214)
(117, 214, 214)
(116, 213, 213)
(116, 213, 213)
(115, 210, 210)
(115, 210, 210)
(133, 244, 244)
(133, 244, 244)
(118, 216, 216)
(118, 216, 216)
(119, 218, 218)
(119, 218, 218)
(139, 255, 255)
(139, 255, 255)
(119, 218, 218)
(119, 218, 218)
(122, 223, 223)
(122, 223, 223)
|
notebook/aio35.ipynb | ###Markdown
asyncio IO Loop Create an event loop (which automatically becomes the default event loop in the context).
###Code
import asyncio
loop = asyncio.get_event_loop()
###Output
_____no_output_____
###Markdown
Run a simple callback as soon as possible:
###Code
def hello_world():
print('Hello World!')
loop.stop()
loop.call_soon(hello_world)
loop.run_forever()
###Output
Hello World!
###Markdown
Coroutine Examples Coroutines can be directly scheduled in the event loop.
###Code
async def aprint(text):
await asyncio.sleep(1)
print(text)
return 42
loop.run_until_complete(aprint('Hello world!'))
###Output
Hello world!
###Markdown
You can use as many awaits as you like in a coroutine:
###Code
async def aprint_twice(text):
await asyncio.sleep(1)
print(text)
await asyncio.sleep(1)
print(text + ' (once more)')
return 42
loop.run_until_complete(aprint_twice('Hello world!'))
###Output
Hello world!
Hello world! (once more)
###Markdown
Multiple Coroutines can be combined and executed concurrently:
###Code
loop.run_until_complete(asyncio.gather(aprint('Task 1'), aprint('Task 2')))
###Output
Task 1
Task 2
###Markdown
Exceptions work just like you would expect
###Code
async def raiser():
await asyncio.sleep(1)
raise ValueError()
async def catcher():
try:
await raiser()
except ValueError:
print('caught something')
loop.run_until_complete(catcher())
###Output
caught something
###Markdown
Automatic Checks Not awaiting a coroutine raises a warning.
###Code
a = aprint('Did I forget something?')
del(a)
###Output
/Users/niko/.virtualenvs/async-examples/lib/python3.5/site-packages/ipykernel/__main__.py:2: RuntimeWarning: coroutine 'aprint' was never awaited
from ipykernel import kernelapp as app
###Markdown
Awaiting something that is not awaitable raises an error.
###Code
async def fail():
await aprint
loop.run_until_complete(fail())
###Output
_____no_output_____
###Markdown
Async Context Manager
###Code
class AsyncContextManager:
async def __aenter__(self):
await aprint('entering context')
async def __aexit__(self, exc_type, exc, tb):
await aprint('exiting context')
async def use_async_context():
async with AsyncContextManager():
print('Hello World!')
loop.run_until_complete(use_async_context())
###Output
entering context
Hello World!
exiting context
###Markdown
One example is using locks (even though this doesn't require async exiting).
###Code
lock = asyncio.Lock()
async def use_lock():
async with lock:
await asyncio.sleep(1)
print('much lock, such concurrency')
loop.run_until_complete(asyncio.gather(use_lock(), use_lock()))
###Output
much lock, such concurrency
much lock, such concurrency
###Markdown
Async for-loop Prepare a simple MongoDB collection to show this feature.
###Code
from motor.motor_asyncio import AsyncIOMotorClient
collection = AsyncIOMotorClient().aiotest.test
loop.run_until_complete(collection.insert({'value': i} for i in range(10)))
###Output
_____no_output_____
###Markdown
The async for-loop saves us the boilerplate code to await each next value. Note that it runs sequentially (i.e., the elements are fetched after each other).
###Code
async def f():
async for doc in collection.find():
print(doc)
loop.run_until_complete(f())
loop.run_until_complete(collection.drop())
###Output
_____no_output_____
###Markdown
Appendix Futures Futures are awaitable as well.
###Code
import collections
isinstance(asyncio.Future(), collections.abc.Awaitable)
###Output
_____no_output_____
###Markdown
Confusion with Generators Generator exceptions do not confuse coroutines.
###Code
async def unconfused():
g = iter(range(1))
next(g)
next(g)
await asyncio.sleep(1)
print('done!')
loop.run_until_complete(unconfused())
###Output
_____no_output_____
###Markdown
Generator-based coroutines, on the other hand, could be confused if the optional decorator is not used.
###Code
# @asyncio.coroutine
def confused():
g = iter(range(1))
next(g)
next(g)
yield from asyncio.sleep(1)
print('done!')
loop.run_until_complete(confused())
###Output
_____no_output_____ |
Deep.Learning/2.Neural-Networks/9.Tensorflow/intro_to_tensorflow.ipynb | ###Markdown
TensorFlow Neural Network Lab In this lab, you'll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, notMNIST, consists of images of a letter from A to J in different fonts.The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in! To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "`All modules imported`".
###Code
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
###Output
All modules imported.
###Markdown
The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
###Code
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
###Output
100%|██████████| 210001/210001 [00:49<00:00, 4217.10files/s]
100%|██████████| 10001/10001 [00:02<00:00, 4454.93files/s]
###Markdown
Problem 1The first problem involves normalizing the features for your training and test data.Implement Min-Max scaling in the `normalize_grayscale()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.Since the raw notMNIST image data is in [grayscale](https://en.wikipedia.org/wiki/Grayscale), the current values range from a min of 0 to a max of 255.Min-Max Scaling:$X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}$*If you're having trouble solving problem 1, you can view the solution [here](https://github.com/udacity/deep-learning/blob/master/intro-to-tensorflow/intro_to_tensorflow_solution.ipynb).*
###Code
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
X = image_data
Xmin = 0
Xmax = 255
a = 0.1
b = 0.9
    taljare = (X - Xmin) * (b - a)   # numerator ("taljare" is Swedish for numerator)
    namnare = Xmax - Xmin            # denominator ("namnare" is Swedish for denominator)
division = taljare / namnare
Xprim = a + division
return Xprim
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
###Output
Saving data to pickle file...
Data cached in pickle file.
###Markdown
Checkpoint All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
###Code
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
###Output
/Users/scrier/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/Users/scrier/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Problem 2Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following float32 tensors: - `features` - Placeholder tensor for feature data (`train_features`/`valid_features`/`test_features`) - `labels` - Placeholder tensor for label data (`train_labels`/`valid_labels`/`test_labels`) - `weights` - Variable Tensor with random numbers from a truncated normal distribution. - See `tf.truncated_normal()` documentation for help. - `biases` - Variable Tensor with all zeros. - See `tf.zeros()` documentation for help.*If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available [here](intro_to_tensorflow_solution.ipynb).*
###Code
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros((labels_count)))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
###Output
Accuracy function created.
###Markdown
Problem 3 Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy. Parameter configurations: Configuration 1 * **Epochs:** 1 * **Learning Rate:** * 0.8 * 0.5 * 0.1 * 0.05 * 0.01 Configuration 2 * **Epochs:** * 1 * 2 * 3 * 4 * 5 * **Learning Rate:** 0.2 The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed. *If you're having trouble solving problem 3, you can view the solution [here](intro_to_tensorflow_solution.ipynb).*
###Code
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
###Output
Epoch 1/5: 100%|██████████| 1114/1114 [00:07<00:00, 142.58batches/s]
Epoch 2/5: 100%|██████████| 1114/1114 [00:08<00:00, 138.01batches/s]
Epoch 3/5: 100%|██████████| 1114/1114 [00:08<00:00, 137.67batches/s]
Epoch 4/5: 100%|██████████| 1114/1114 [00:07<00:00, 140.32batches/s]
Epoch 5/5: 100%|██████████| 1114/1114 [00:08<00:00, 136.89batches/s]
###Markdown
Test You're going to test your model against your held-out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
###Code
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
###Output
Epoch 1/5: 100%|██████████| 1114/1114 [00:01<00:00, 657.32batches/s]
Epoch 2/5: 100%|██████████| 1114/1114 [00:01<00:00, 760.31batches/s]
Epoch 3/5: 100%|██████████| 1114/1114 [00:01<00:00, 737.18batches/s]
Epoch 4/5: 100%|██████████| 1114/1114 [00:01<00:00, 753.05batches/s]
Epoch 5/5: 100%|██████████| 1114/1114 [00:01<00:00, 758.58batches/s] |
engsci211_mc2_gradient.ipynb | ###Markdown
Multivariable Calculus - Gradient of a Scalar Field *ENGSCI 211 Mathematical Modelling 2* Background to Notebook This notebook contains two demos aimed at improving your understanding of the gradient of a scalar field and the directional derivative. gradient() This demo contains two subplots. The subplot on the left shows the 3D surface associated with a function of two independent variables. The red points are the stationary points of the surface, and the black point is the one used by the gradient ascent/descent algorithm in the other subplot. The subplot on the right shows, by default, a 2D contour plot of the surface. The red points indicate stationary points i.e. local minimum, maximum or saddle points for the surface. The black point is the starting location for an iterative gradient ascent/descent algorithm and can be modified. To see how the gradient vector points in the direction of fastest increase in the function, you can increase the *iteration number* slider and change the step size to see how moving along the local gradient vector (gradient ascent), or negative gradient vector (gradient descent), can move towards a local stationary point. You can optionally show the entire gradient vector field of the surface. directionalderivative() This demo contains two subplots. The subplot on the left shows the 3D surface associated with a function of two independent variables. The red points are the stationary points of the surface, and the black point is the one used in the directional derivative calculation. The subplot on the right shows, by default, a 2D contour plot of the surface. The red points indicate stationary points i.e. local minimum, maximum or saddle points for the surface. The black point is the point at which the directional derivative is calculated. The gradient vector, $\nabla f$, at that point is shown as a blue vector. The direction, $\hat{a}$, in which we wish to determine the function's rate of change (i.e. the directional derivative) is shown as a red **unit** vector and can be modified. The dot product, $\nabla f \cdot \hat{a}$, along the direction of $\hat{a}$ is shown as a green vector and the value of $\nabla f \cdot \hat{a}$ is equal to the rate of change of $f$ along the direction $\hat{a}$ at the chosen point, i.e. the directional derivative. Notebook Examples The scalar fields used in the examples are given by: $$f_4 \, (x,y) = \frac{x^3}{3} - \frac{y^3}{3} - x + y + 3$$ $$f_5 \, (x,y) = x^2 - y^2$$
###Code
# run this cell prior to the others
from sourcecode_mc import gradient, directionalderivative
%matplotlib inline
gradient()
directionalderivative()
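# A small worked check (not part of the demo module, plain numpy): for
# f5(x, y) = x^2 - y^2 the gradient is grad f5 = (2x, -2y). At the point (1, 2),
# along the unit direction a_hat = (1, 1)/sqrt(2), the directional derivative is
# the dot product grad f5 . a_hat evaluated below.
import numpy as np
x0, y0 = 1.0, 2.0
grad_f5 = np.array([2.0 * x0, -2.0 * y0])      # analytic gradient of x^2 - y^2 at (1, 2)
a_hat = np.array([1.0, 1.0]) / np.sqrt(2.0)    # unit direction
print("grad f5(1, 2) =", grad_f5)
print("directional derivative =", grad_f5 @ a_hat)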
###Output
_____no_output_____ |
notebooks/tutorials/2_Node_Classification.ipynb | ###Markdown
Node Classification with Graph Neural Networks [Previous: Introduction: Hands-on Graph Neural Networks](https://colab.research.google.com/drive/1h3-vJGRVloF5zStxL5I0rSy4ZUPNsjy8) This tutorial will teach you how to apply **Graph Neural Networks (GNNs) to the task of node classification**. Here, we are given the ground-truth labels of only a small subset of nodes, and want to infer the labels for all the remaining nodes (*transductive learning*). To demonstrate, we make use of the `Cora` dataset, which is a **citation network** where nodes represent documents. Each node is described by a 1433-dimensional bag-of-words feature vector. Two documents are connected if there exists a citation link between them. The task is to infer the category of each document (7 in total). This dataset was first introduced by [Yang et al. (2016)](https://arxiv.org/abs/1603.08861) as one of the datasets of the `Planetoid` benchmark suite. We can again make use of [PyTorch Geometric](https://github.com/rusty1s/pytorch_geometric) for easy access to this dataset via [`torch_geometric.datasets.Planetoid`](https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.htmltorch_geometric.datasets.Planetoid):
###Code
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures
dataset = Planetoid(root='data/Planetoid', name='Cora', transform=NormalizeFeatures())
print()
print(f'Dataset: {dataset}:')
print('======================')
print(f'Number of graphs: {len(dataset)}')
print(f'Number of features: {dataset.num_features}')
print(f'Number of classes: {dataset.num_classes}')
data = dataset[0] # Get the first graph object.
print()
print(data)
print('===========================================================================================================')
# Gather some statistics about the graph.
print(f'Number of nodes: {data.num_nodes}')
print(f'Number of edges: {data.num_edges}')
print(f'Average node degree: {data.num_edges / data.num_nodes:.2f}')
print(f'Number of training nodes: {data.train_mask.sum()}')
print(f'Training node label rate: {int(data.train_mask.sum()) / data.num_nodes:.2f}')
print(f'Contains isolated nodes: {data.contains_isolated_nodes()}')
print(f'Contains self-loops: {data.contains_self_loops()}')
print(f'Is undirected: {data.is_undirected()}')
###Output
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.x
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.tx
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.allx
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.y
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.ty
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.ally
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.graph
Downloading https://github.com/kimiyoung/planetoid/raw/master/data/ind.cora.test.index
Processing...
Done!
Dataset: Cora():
======================
Number of graphs: 1
Number of features: 1433
Number of classes: 7
Data(edge_index=[2, 10556], test_mask=[2708], train_mask=[2708], val_mask=[2708], x=[2708, 1433], y=[2708])
===========================================================================================================
Number of nodes: 2708
Number of edges: 10556
Average node degree: 3.90
Number of training nodes: 140
Training node label rate: 0.05
Contains isolated nodes: False
Contains self-loops: False
Is undirected: True
###Markdown
Overall, this dataset is quite similar to the previously used [`KarateClub`](https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.htmltorch_geometric.datasets.KarateClub) network. We can see that the `Cora` network holds 2,708 nodes and 10,556 edges, resulting in an average node degree of 3.9. For training this dataset, we are given the ground-truth categories of 140 nodes (20 for each class). This results in a training node label rate of only 5%. In contrast to `KarateClub`, this graph holds the additional attributes `val_mask` and `test_mask`, which denote which nodes should be used for validation and testing. Furthermore, we make use of **[data transformations](https://pytorch-geometric.readthedocs.io/en/latest/notes/introduction.htmldata-transforms) via `transform=NormalizeFeatures()`**. Transforms can be used to modify your input data before inputting them into a neural network, *e.g.*, for normalization or data augmentation. Here, we [row-normalize](https://pytorch-geometric.readthedocs.io/en/latest/modules/transforms.htmltorch_geometric.transforms.NormalizeFeatures) the bag-of-words input feature vectors. We can further see that this network is undirected, and that there are no isolated nodes (each document has at least one citation). Training a Multi-Layer Perceptron Network (MLP) In theory, we should be able to infer the category of a document solely based on its content, *i.e.* its bag-of-words feature representation, without taking any relational information into account. Let's verify that by constructing a simple MLP that solely operates on input node features (using shared weights across all nodes):
###Code
import torch
from torch.nn import Linear
import torch.nn.functional as F
class MLP(torch.nn.Module):
def __init__(self, hidden_channels):
super(MLP, self).__init__()
torch.manual_seed(12345)
self.lin1 = Linear(dataset.num_features, hidden_channels)
self.lin2 = Linear(hidden_channels, dataset.num_classes)
def forward(self, x):
x = self.lin1(x)
x = x.relu()
x = F.dropout(x, p=0.5, training=self.training)
x = self.lin2(x)
return x
model = MLP(hidden_channels=16)
print(model)
###Output
MLP(
(lin1): Linear(in_features=1433, out_features=16, bias=True)
(lin2): Linear(in_features=16, out_features=7, bias=True)
)
###Markdown
Our MLP is defined by two linear layers and enhanced by [ReLU](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html?highlight=relutorch.nn.ReLU) non-linearity and [dropout](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html?highlight=dropouttorch.nn.Dropout).Here, we first reduce the 1433-dimensional feature vector to a low-dimensional embedding (`hidden_channels=16`), while the second linear layer acts as a classifier that should map each low-dimensional node embedding to one of the 7 classes.Let's train our simple MLP by following a similar procedure as described in [the first part of this tutorial](https://colab.research.google.com/drive/1h3-vJGRVloF5zStxL5I0rSy4ZUPNsjy8).We again make use of the **cross entropy loss** and **Adam optimizer**.This time, we also define a **`test` function** to evaluate how well our final model performs on the test node set (which labels have not been observed during training).
###Code
from IPython.display import Javascript # Restrict height of output cell.
display(Javascript('''google.colab.output.setIframeHeight(0, true, {maxHeight: 300})'''))
model = MLP(hidden_channels=16)
criterion = torch.nn.CrossEntropyLoss() # Define loss criterion.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4) # Define optimizer.
def train():
model.train()
optimizer.zero_grad() # Clear gradients.
out = model(data.x) # Perform a single forward pass.
loss = criterion(out[data.train_mask], data.y[data.train_mask]) # Compute the loss solely based on the training nodes.
loss.backward() # Derive gradients.
optimizer.step() # Update parameters based on gradients.
return loss
def test():
model.eval()
out = model(data.x)
pred = out.argmax(dim=1) # Use the class with highest probability.
test_correct = pred[data.test_mask] == data.y[data.test_mask] # Check against ground-truth labels.
test_acc = int(test_correct.sum()) / int(data.test_mask.sum()) # Derive ratio of correct predictions.
return test_acc
for epoch in range(1, 201):
loss = train()
print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')
###Output
_____no_output_____
###Markdown
After training the model, we can call the `test` function to see how well our model performs on unseen labels.Here, we are interested in the accuracy of the model, *i.e.*, the ratio of correctly classified nodes:
###Code
test_acc = test()
print(f'Test Accuracy: {test_acc:.4f}')
###Output
Test Accuracy: 0.5900
###Markdown
As one can see, our MLP performs rather badly with only about 59% test accuracy. But why does the MLP not perform better? The main reason is that this model suffers from heavy overfitting due to the **small number of training nodes**, and therefore generalizes poorly to unseen node representations. It also fails to incorporate an important bias into the model: **Cited papers are very likely related to the category of a document**. That is exactly where Graph Neural Networks come into play and can help to boost the performance of our model. Training a Graph Neural Network (GNN) We can easily convert our MLP to a GNN by swapping the `torch.nn.Linear` layers with PyG's GNN operators. Following up on [the first part of this tutorial](https://colab.research.google.com/drive/1h3-vJGRVloF5zStxL5I0rSy4ZUPNsjy8), we replace the linear layers by the [`GCNConv`](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.htmltorch_geometric.nn.conv.GCNConv) module. To recap, the **GCN layer** ([Kipf et al. (2017)](https://arxiv.org/abs/1609.02907)) is defined as $$\mathbf{x}_v^{(\ell + 1)} = \mathbf{W}^{(\ell + 1)} \sum_{w \in \mathcal{N}(v) \, \cup \, \{ v \}} \frac{1}{c_{w,v}} \cdot \mathbf{x}_w^{(\ell)}$$ where $\mathbf{W}^{(\ell + 1)}$ denotes a trainable weight matrix of shape `[num_output_features, num_input_features]` and $c_{w,v}$ refers to a fixed normalization coefficient for each edge. In contrast, a single linear layer is defined as $$\mathbf{x}_v^{(\ell + 1)} = \mathbf{W}^{(\ell + 1)} \mathbf{x}_v^{(\ell)}$$ which does not make use of neighboring node information.
###Code
from torch_geometric.nn import GCNConv
class GCN(torch.nn.Module):
def __init__(self, hidden_channels):
super(GCN, self).__init__()
torch.manual_seed(12345)
self.conv1 = GCNConv(dataset.num_features, hidden_channels)
self.conv2 = GCNConv(hidden_channels, dataset.num_classes)
def forward(self, x, edge_index):
x = self.conv1(x, edge_index)
x = x.relu()
x = F.dropout(x, p=0.5, training=self.training)
x = self.conv2(x, edge_index)
return x
model = GCN(hidden_channels=16)
print(model)
###Output
GCN(
(conv1): GCNConv(1433, 16)
(conv2): GCNConv(16, 7)
)
###Markdown
Let's visualize the node embeddings of our **untrained** GCN network. For visualization, we make use of [**TSNE**](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) to embed our 7-dimensional node embeddings onto a 2D plane.
###Code
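# `visualize` is not defined anywhere in this notebook excerpt; the helper below is a
# minimal stand-in (an assumption about the tutorial's original helper) that embeds the
# node representations with TSNE and scatter-plots them, colored by label.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualize(h, color):
    z = TSNE(n_components=2).fit_transform(h.detach().cpu().numpy())
    plt.figure(figsize=(10, 10))
    plt.xticks([])
    plt.yticks([])
    plt.scatter(z[:, 0], z[:, 1], s=70, c=color, cmap="Set2")
    plt.show()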
model = GCN(hidden_channels=16)
model.eval()
out = model(data.x, data.edge_index)
visualize(out, color=data.y)
###Output
_____no_output_____
###Markdown
As one can see, there is at least *some kind* of clustering (*e.g.*, for the "blue" nodes), but we certainly can do better by training our model. The training and testing procedure is once again the same, but this time we make use of the node features `x` **and** the graph connectivity `edge_index` as input to our GCN model.
###Code
from IPython.display import Javascript # Restrict height of output cell.
display(Javascript('''google.colab.output.setIframeHeight(0, true, {maxHeight: 300})'''))
model = GCN(hidden_channels=16)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()
def train():
model.train()
optimizer.zero_grad() # Clear gradients.
out = model(data.x, data.edge_index) # Perform a single forward pass.
loss = criterion(out[data.train_mask], data.y[data.train_mask]) # Compute the loss solely based on the training nodes.
loss.backward() # Derive gradients.
optimizer.step() # Update parameters based on gradients.
return loss
def test():
model.eval()
out = model(data.x, data.edge_index)
pred = out.argmax(dim=1) # Use the class with highest probability.
test_correct = pred[data.test_mask] == data.y[data.test_mask] # Check against ground-truth labels.
test_acc = int(test_correct.sum()) / int(data.test_mask.sum()) # Derive ratio of correct predictions.
return test_acc
for epoch in range(1, 201):
loss = train()
print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')
###Output
_____no_output_____
###Markdown
After training the model, we can check its test accuracy:
###Code
test_acc = test()
print(f'Test Accuracy: {test_acc:.4f}')
###Output
Test Accuracy: 0.8140
###Markdown
**There it is!** By simply swapping the linear layers with GNN layers, we can reach **81.4% test accuracy**! This is in stark contrast to the 59% test accuracy obtained by our MLP, indicating that relational information plays a crucial role in obtaining better performance. We can also verify that once again by looking at the output embeddings of our **trained** model, which now produces a far better clustering of nodes of the same category.
###Code
model.eval()
out = model(data.x, data.edge_index)
visualize(out, color=data.y)
###Output
_____no_output_____
###Markdown
Conclusion In this chapter, you have seen how to apply GNNs to real-world problems, and, in particular, how they can effectively be used for boosting a model's performance. In the next section, we will look into how GNNs can be used for the task of graph classification. [Next: Graph Classification with Graph Neural Networks](https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb) (Optional) Exercises 1. To achieve better model performance and to avoid overfitting, it is usually a good idea to select the best model based on an additional validation set. The `Cora` dataset provides a validation node set as `data.val_mask`, but we haven't used it yet. Can you modify the code to select and test the model with the highest validation performance? This should bring test performance to **82% accuracy**. 2. How does `GCN` behave when increasing the hidden feature dimensionality or the number of layers? Does increasing the number of layers help at all?
###Code
from torch_geometric.nn import GCNConv, GATConv
class GATGCN(torch.nn.Module):
def __init__(self, hidden_channels):
super(GATGCN, self).__init__()
torch.manual_seed(12345)
self.conv1 = GATConv(dataset.num_features, dataset.num_classes, heads=3, dropout=0, concat=False)
#self.conv2 = GATConv(hidden_channels, hidden_channels, heads=3, dropout=0, concat=False)
#self.conv3 = GATConv(hidden_channels, hidden_channels, heads=3, dropout=0, concat=False)
#self.conv4 = GATConv(hidden_channels, dataset.num_classes, heads=3, dropout=0, concat=False)
def forward(self, x, edge_index):
x = self.conv1(x, edge_index)
return x
GAT_model = GATGCN(hidden_channels=16)
print(GAT_model)
GAT_model = GATGCN(hidden_channels=16)
optimizer = torch.optim.Adam(GAT_model.parameters(), lr=0.01, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()
def GAT_train():
GAT_model.train()
optimizer.zero_grad() # Clear gradients.
out = GAT_model(data.x, data.edge_index) # Perform a single forward pass.
loss = criterion(out[data.train_mask], data.y[data.train_mask]) # Compute the loss solely based on the training nodes.
loss.backward() # Derive gradients.
optimizer.step() # Update parameters based on gradients.
return loss
def GAT_test():
GAT_model.eval()
out = GAT_model(data.x, data.edge_index)
pred = out.argmax(dim=1) # Use the class with highest probability.
test_correct = pred[data.test_mask] == data.y[data.test_mask] # Check against ground-truth labels.
test_acc = int(test_correct.sum()) / int(data.test_mask.sum()) # Derive ratio of correct predictions.
return test_acc
for epoch in range(1, 501):
    loss = GAT_train()
print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}')
test_acc = GAT_test()
print(f'Test Accuracy: {test_acc:.4f}')
###Output
Test Accuracy: 0.6130
|
Lectures/week39.ipynb | ###Markdown
Week 39: Optimization and Gradient Methods **Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University Date: **Sep 28, 2021** Copyright 1999-2021, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license Plan for week 39 * Thursday: Repetition of Logistic regression equations and classification problems and discussion of Gradient methods. Discussion of project 1 and examples on how to implement Logistic Regression * Friday: Stochastic Gradient descent with examples and automatic differentiation * Reading recommendations: See [lecture notes for week 39](https://compphysics.github.io/MachineLearning/doc/web/course.html). For a good discussion on gradient methods, see Goodfellow et al section 4.3-4.5 and chapter 8. We will come back to the latter chapter in our discussion of Neural networks as well. Thursday September 30 [Overview Video, why do we care about gradient methods?](https://www.uio.no/studier/emner/matnat/fys/FYS-STK3155/h20/forelesningsvideoer/OverarchingAimsWeek39.mp4?vrtx=view-as-webpage) Searching for Optimal Regularization Parameters $\lambda$ In project 1, when using Ridge and Lasso regression, we end up searching for the optimal parameter $\lambda$ which minimizes our selected scores (MSE or $R2$ values for example). The brute force approach, as discussed in the code here for Ridge regression, consists in evaluating the MSE as a function of different $\lambda$ values. Based on these calculations, one tries then to determine the value of the hyperparameter $\lambda$ which results in optimal scores (for example the smallest MSE or an $R2=1$).
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import linear_model
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(2021)
n = 100
x = np.random.rand(n)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.randn(n)
Maxpolydegree = 5
X = np.zeros((n,Maxpolydegree-1))
for degree in range(1,Maxpolydegree): #No intercept column
X[:,degree-1] = x**(degree)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Decide which values of lambda to use
nlambdas = 500
MSERidgePredict = np.zeros(nlambdas)
lambdas = np.logspace(-4, 2, nlambdas)
for i in range(nlambdas):
lmb = lambdas[i]
RegRidge = linear_model.Ridge(lmb)
RegRidge.fit(X_train,y_train)
ypredictRidge = RegRidge.predict(X_test)
MSERidgePredict[i] = MSE(y_test,ypredictRidge)
# Now plot the results
plt.figure()
plt.plot(np.log10(lambdas), MSERidgePredict, 'g--', label = 'MSE SL Ridge Test')
plt.xlabel('log10(lambda)')
plt.ylabel('MSE')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Here we have performed a rather data-greedy calculation as a function of the regularization parameter $\lambda$. There is no resampling here. The latter can easily be added by employing the function **RidgeCV** instead of just calling the **Ridge** function. For **RidgeCV** we need to pass the array of $\lambda$ values. By inspecting the figure we can in turn determine which is the optimal regularization parameter. This manual inspection, however, quickly becomes impractical. Grid Search An alternative is to use the so-called grid search functionality included with the library **Scikit-Learn**, as demonstrated for the same example here.
###Code
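# Before the grid search: a short sketch of the RidgeCV alternative mentioned in the
# markdown above. It reuses X_train, y_train, X_test, y_test, lambdas and MSE from the
# previous cell and lets RidgeCV pick the best lambda by cross-validation.
from sklearn.linear_model import RidgeCV
ridgecv = RidgeCV(alphas=lambdas, cv=5)
ridgecv.fit(X_train, y_train)
print(f"Best lambda from RidgeCV: {ridgecv.alpha_}")
print(f"Test MSE with RidgeCV: {MSE(y_test, ridgecv.predict(X_test))}")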
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(2021)
n = 100
x = np.random.rand(n)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.randn(n)
Maxpolydegree = 5
X = np.zeros((n,Maxpolydegree-1))
for degree in range(1,Maxpolydegree): #No intercept column
X[:,degree-1] = x**(degree)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Decide which values of lambda to use
nlambdas = 10
lambdas = np.logspace(-4, 2, nlambdas)
# create and fit a ridge regression model, testing each alpha
model = Ridge()
gridsearch = GridSearchCV(estimator=model, param_grid=dict(alpha=lambdas))
gridsearch.fit(X_train, y_train)
print(gridsearch)
ypredictRidge = gridsearch.predict(X_test)
# summarize the results of the grid search
print(f"Best estimated lambda-value: {gridsearch.best_estimator_.alpha}")
print(f"MSE score: {MSE(y_test,ypredictRidge)}")
print(f"R2 score: {R2(y_test,ypredictRidge)}")
###Output
GridSearchCV(estimator=Ridge(),
param_grid={'alpha': array([1.00000000e-04, 4.64158883e-04, 2.15443469e-03, 1.00000000e-02,
4.64158883e-02, 2.15443469e-01, 1.00000000e+00, 4.64158883e+00,
2.15443469e+01, 1.00000000e+02])})
Best estimated lambda-value: 100.0
MSE score: 1.0892144853354966
R2 score: -0.0038332550504751595
###Markdown
*By default the grid search function includes cross-validation with five folds.* The [Scikit-Learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.htmlsklearn.model_selection.GridSearchCV) contains more information on how to set the different parameters. Randomized Grid Search An alternative to the above manual grid set-up is to use a random search where the parameters are tuned from a random distribution (uniform below) for a fixed number of iterations. A model is constructed and evaluated for each combination of chosen parameters. We repeat the previous example but **now with a random search**.
###Code
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from scipy.stats import uniform as randuniform
from sklearn.model_selection import RandomizedSearchCV
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
# A seed just to ensure that the random numbers are the same for every run.
# Useful for eventual debugging.
np.random.seed(2021)
n = 100
x = np.random.rand(n)
y = np.exp(-x**2) + 1.5 * np.exp(-(x-2)**2)+ np.random.randn(n)
Maxpolydegree = 5
X = np.zeros((n,Maxpolydegree-1))
for degree in range(1,Maxpolydegree): #No intercept column
X[:,degree-1] = x**(degree)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
param_grid = {'alpha': randuniform()} #######################################
# create and fit a ridge regression model, testing each alpha
model = Ridge()
gridsearch = RandomizedSearchCV(estimator=model, param_distributions=param_grid, n_iter=100)
gridsearch.fit(X_train, y_train)
print(gridsearch)
ypredictRidge = gridsearch.predict(X_test)
# summarize the results of the grid search
print(f"Best estimated lambda-value: {gridsearch.best_estimator_.alpha}")
print(f"MSE score: {MSE(y_test,ypredictRidge)}")
print(f"R2 score: {R2(y_test,ypredictRidge)}")
###Output
RandomizedSearchCV(estimator=Ridge(), n_iter=100,
param_distributions={'alpha': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7f7f67f9b370>})
Best estimated lambda-value: 0.9849967686928113
MSE score: 1.0853136633465326
R2 score: -0.0002382102844775691
###Markdown
Optimization, the central part of any Machine Learning algortithmAlmost every problem in machine learning and data science starts witha dataset $X$, a model $g(\beta)$, which is a function of theparameters $\beta$ and a cost function $C(X, g(\beta))$ that allowsus to judge how well the model $g(\beta)$ explains the observations$X$. The model is fit by finding the values of $\beta$ that minimizethe cost function. Ideally we would be able to solve for $\beta$analytically, however this is not possible in general and we must usesome approximative/numerical method to compute the minimum. Revisiting our Logistic Regression caseIn our discussion on Logistic Regression we studied the case oftwo classes, with $y_i$ either$0$ or $1$. Furthermore we assumed also that we have only twoparameters $\beta$ in our fitting, that is wedefined probabilities $$\begin{align*}p(y_i=1|x_i,\boldsymbol{\beta}) &= \frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}},\nonumber\\p(y_i=0|x_i,\boldsymbol{\beta}) &= 1 - p(y_i=1|x_i,\boldsymbol{\beta}),\end{align*}$$ where $\boldsymbol{\beta}$ are the weights we wish to extract from data, in our case $\beta_0$ and $\beta_1$. The equations to solveOur compact equations used a definition of a vector $\boldsymbol{y}$ with $n$elements $y_i$, an $n\times p$ matrix $\boldsymbol{X}$ which contains the$x_i$ values and a vector $\boldsymbol{p}$ of fitted probabilities$p(y_i\vert x_i,\boldsymbol{\beta})$. We rewrote in a more compact formthe first derivative of the cost function as $$\frac{\partial \mathcal{C}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = -\boldsymbol{X}^T\left(\boldsymbol{y}-\boldsymbol{p}\right).$$ If we in addition define a diagonal matrix $\boldsymbol{W}$ with elements $p(y_i\vert x_i,\boldsymbol{\beta})(1-p(y_i\vert x_i,\boldsymbol{\beta})$, we can obtain a compact expression of the second derivative as $$\frac{\partial^2 \mathcal{C}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}\partial \boldsymbol{\beta}^T} = \boldsymbol{X}^T\boldsymbol{W}\boldsymbol{X}.$$ This defines what is called the Hessian matrix. Solving using Newton-Raphson's methodIf we can set up these equations, Newton-Raphson's iterative method is normally the method of choice. It requires however that we can compute in an efficient way the matrices that define the first and second derivatives. Our iterative scheme is then given by $$\boldsymbol{\beta}^{\mathrm{new}} = \boldsymbol{\beta}^{\mathrm{old}}-\left(\frac{\partial^2 \mathcal{C}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}\partial \boldsymbol{\beta}^T}\right)^{-1}_{\boldsymbol{\beta}^{\mathrm{old}}}\times \left(\frac{\partial \mathcal{C}(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}}\right)_{\boldsymbol{\beta}^{\mathrm{old}}},$$ or in matrix form as $$\boldsymbol{\beta}^{\mathrm{new}} = \boldsymbol{\beta}^{\mathrm{old}}-\left(\boldsymbol{X}^T\boldsymbol{W}\boldsymbol{X} \right)^{-1}\times \left(-\boldsymbol{X}^T(\boldsymbol{y}-\boldsymbol{p}) \right)_{\boldsymbol{\beta}^{\mathrm{old}}}.$$ The right-hand side is computed with the old values of $\beta$. If we can compute these matrices, in particular the Hessian, the above is often the easiest method to implement. Brief reminder on Newton-Raphson's methodLet us quickly remind ourselves how we derive the above method.Perhaps the most celebrated of all one-dimensional root-findingroutines is Newton's method, also called the Newton-Raphsonmethod. This method requires the evaluation of both thefunction $f$ and its derivative $f'$ at arbitrary points. 
If you can only calculate the derivativenumerically and/or your function is not of the smooth type, wenormally discourage the use of this method. The equationsThe Newton-Raphson formula consists geometrically of extending thetangent line at a current point until it crosses zero, then settingthe next guess to the abscissa of that zero-crossing. The mathematicsbehind this method is rather simple. Employing a Taylor expansion for$x$ sufficiently close to the solution $s$, we have $$f(s)=0=f(x)+(s-x)f'(x)+\frac{(s-x)^2}{2}f''(x) +\dots. \label{eq:taylornr} \tag{1}$$ For small enough values of the function and for well-behavedfunctions, the terms beyond linear are unimportant, hence we obtain $$f(x)+(s-x)f'(x)\approx 0,$$ yielding $$s\approx x-\frac{f(x)}{f'(x)}.$$ Having in mind an iterative procedure, it is natural to start iterating with $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}.$$ Simple geometric interpretationThe above is Newton-Raphson's method. It has a simple geometricinterpretation, namely $x_{n+1}$ is the point where the tangent from$(x_n,f(x_n))$ crosses the $x$-axis. Close to the solution,Newton-Raphson converges fast to the desired result. However, if weare far from a root, where the higher-order terms in the series areimportant, the Newton-Raphson formula can give grossly inaccurateresults. For instance, the initial guess for the root might be so farfrom the true root as to let the search interval include a localmaximum or minimum of the function. If an iteration places a trialguess near such a local extremum, so that the first derivative nearlyvanishes, then Newton-Raphson may fail totally Extending to more than one variableNewton's method can be generalized to systems of several non-linear equationsand variables. Consider the case with two equations $$\begin{array}{cc} f_1(x_1,x_2) &=0\\ f_2(x_1,x_2) &=0,\end{array}$$ which we Taylor expand to obtain $$\begin{array}{cc} 0=f_1(x_1+h_1,x_2+h_2)=&f_1(x_1,x_2)+h_1 \partial f_1/\partial x_1+h_2 \partial f_1/\partial x_2+\dots\\ 0=f_2(x_1+h_1,x_2+h_2)=&f_2(x_1,x_2)+h_1 \partial f_2/\partial x_1+h_2 \partial f_2/\partial x_2+\dots \end{array}.$$ Defining the Jacobian matrix ${\bf \boldsymbol{J}}$ we have $${\bf \boldsymbol{J}}=\left( \begin{array}{cc} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 \\ \partial f_2/\partial x_1 &\partial f_2/\partial x_2 \end{array} \right),$$ we can rephrase Newton's method as $$\left(\begin{array}{c} x_1^{n+1} \\ x_2^{n+1} \end{array} \right)=\left(\begin{array}{c} x_1^{n} \\ x_2^{n} \end{array} \right)+\left(\begin{array}{c} h_1^{n} \\ h_2^{n} \end{array} \right),$$ where we have defined $$\left(\begin{array}{c} h_1^{n} \\ h_2^{n} \end{array} \right)= -{\bf \boldsymbol{J}}^{-1} \left(\begin{array}{c} f_1(x_1^{n},x_2^{n}) \\ f_2(x_1^{n},x_2^{n}) \end{array} \right).$$ We need thus to compute the inverse of the Jacobian matrix and itis to understand that difficulties mayarise in case ${\bf \boldsymbol{J}}$ is nearly singular.It is rather straightforward to extend the above scheme to systems ofmore than two non-linear equations. In our case, the Jacobian matrix is given by the Hessian that represents the second derivative of cost function. 
Steepest descentThe basic idea of gradient descent isthat a function $F(\mathbf{x})$, $\mathbf{x} \equiv (x_1,\cdots,x_n)$, decreases fastest if one goes from $\bf {x}$ in thedirection of the negative gradient $-\nabla F(\mathbf{x})$.It can be shown that if $$\mathbf{x}_{k+1} = \mathbf{x}_k - \gamma_k \nabla F(\mathbf{x}_k),$$ with $\gamma_k > 0$.For $\gamma_k$ small enough, then $F(\mathbf{x}_{k+1}) \leqF(\mathbf{x}_k)$. This means that for a sufficiently small $\gamma_k$we are always moving towards smaller function values, i.e a minimum. More on Steepest descentThe previous observation is the basis of the method of steepestdescent, which is also referred to as just gradient descent (GD). Onestarts with an initial guess $\mathbf{x}_0$ for a minimum of $F$ andcomputes new approximations according to $$\mathbf{x}_{k+1} = \mathbf{x}_k - \gamma_k \nabla F(\mathbf{x}_k), \ \ k \geq 0.$$ The parameter $\gamma_k$ is often referred to as the step length orthe learning rate within the context of Machine Learning. The idealIdeally the sequence $\{\mathbf{x}_k \}_{k=0}$ converges to a globalminimum of the function $F$. In general we do not know if we are in aglobal or local minimum. In the special case when $F$ is a convexfunction, all local minima are also global minima, so in this casegradient descent can converge to the global solution. The advantage ofthis scheme is that it is conceptually simple and straightforward toimplement. However the method in this form has some severelimitations:In machine learing we are often faced with non-convex high dimensionalcost functions with many local minima. Since GD is deterministic wewill get stuck in a local minimum, if the method converges, unless wehave a very good intial guess. This also implies that the scheme issensitive to the chosen initial condition.Note that the gradient is a function of $\mathbf{x} =(x_1,\cdots,x_n)$ which makes it expensive to compute numerically. The sensitiveness of the gradient descentThe gradient descent method is sensitive to the choice of learning rate $\gamma_k$. This is dueto the fact that we are only guaranteed that $F(\mathbf{x}_{k+1}) \leqF(\mathbf{x}_k)$ for sufficiently small $\gamma_k$. The problem is todetermine an optimal learning rate. If the learning rate is chosen toosmall the method will take a long time to converge and if it is toolarge we can experience erratic behavior.Many of these shortcomings can be alleviated by introducingrandomness. One such method is that of Stochastic Gradient Descent(SGD), see below. Convex functionsIdeally we want our cost/loss function to be convex(concave).First we give the definition of a convex set: A set $C$ in$\mathbb{R}^n$ is said to be convex if, for all $x$ and $y$ in $C$ andall $t \in (0,1)$ , the point $(1 − t)x + ty$ also belongs toC. Geometrically this means that every point on the line segmentconnecting $x$ and $y$ is in $C$ as discussed below.The convex subsets of $\mathbb{R}$ are the intervals of$\mathbb{R}$. Examples of convex sets of $\mathbb{R}^2$ are theregular polygons (triangles, rectangles, pentagons, etc...). Convex function**Convex function**: Let $X \subset \mathbb{R}^n$ be a convex set. Assume that the function $f: X \rightarrow \mathbb{R}$ is continuous, then $f$ is said to be convex if $$f(tx_1 + (1-t)x_2) \leq tf(x_1) + (1-t)f(x_2) $$ for all $x_1, x_2 \in X$ and for all $t \in [0,1]$. If $\leq$ is replaced with a strict inequaltiy in the definition, we demand $x_1 \neq x_2$ and $t\in(0,1)$ then $f$ is said to be strictly convex. 
For a single variable function, convexity means that if you draw a straight line connecting $f(x_1)$ and $f(x_2)$, the value of the function on the interval $[x_1,x_2]$ is always below the line as illustrated below. Conditions on convex functionsIn the following we state first and second-order conditions whichensures convexity of a function $f$. We write $D_f$ to denote thedomain of $f$, i.e the subset of $R^n$ where $f$ is defined. For moredetails and proofs we refer to: [S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press](http://stanford.edu/boyd/cvxbook/, 2004).**First order condition.**Suppose $f$ is differentiable (i.e $\nabla f(x)$ is well defined forall $x$ in the domain of $f$). Then $f$ is convex if and only if $D_f$is a convex set and $$f(y) \geq f(x) + \nabla f(x)^T (y-x) $$ holdsfor all $x,y \in D_f$. This condition means that for a convex functionthe first order Taylor expansion (right hand side above) at any pointa global under estimator of the function. To convince yourself you canmake a drawing of $f(x) = x^2+1$ and draw the tangent line to $f(x)$ andnote that it is always below the graph.**Second order condition.**Assume that $f$ is twicedifferentiable, i.e the Hessian matrix exists at each point in$D_f$. Then $f$ is convex if and only if $D_f$ is a convex set and itsHessian is positive semi-definite for all $x\in D_f$. For asingle-variable function this reduces to $f''(x) \geq 0$. Geometrically this means that $f$ has nonnegative curvatureeverywhere.This condition is particularly useful since it gives us an procedure for determining if the function under consideration is convex, apart from using the definition. More on convex functionsThe next result is of great importance to us and the reason why we aregoing on about convex functions. In machine learning we frequentlyhave to minimize a loss/cost function in order to find the bestparameters for the model we are considering. Ideally we want theglobal minimum (for high-dimensional models it is hard to knowif we have local or global minimum). However, if the cost/loss functionis convex the following result provides invaluable information:**Any minimum is global for convex functions.**Consider the problem of finding $x \in \mathbb{R}^n$ such that $f(x)$is minimal, where $f$ is convex and differentiable. Then, any point$x^*$ that satisfies $\nabla f(x^*) = 0$ is a global minimum.This result means that if we know that the cost/loss function is convex and we are able to find a minimum, we are guaranteed that it is a global minimum. Some simple problems1. Show that $f(x)=x^2$ is convex for $x \in \mathbb{R}$ using the definition of convexity. Hint: If you re-write the definition, $f$ is convex if the following holds for all $x,y \in D_f$ and any $\lambda \in [0,1]$ $\lambda f(x)+(1-\lambda)f(y)-f(\lambda x + (1-\lambda) y ) \geq 0$.2. Using the second order condition show that the following functions are convex on the specified domain. * $f(x) = e^x$ is convex for $x \in \mathbb{R}$. * $g(x) = -\ln(x)$ is convex for $x \in (0,\infty)$.3. Let $f(x) = x^2$ and $g(x) = e^x$. Show that $f(g(x))$ and $g(f(x))$ is convex for $x \in \mathbb{R}$. Also show that if $f(x)$ is any convex function than $h(x) = e^{f(x)}$ is convex.4. A norm is any function that satisfy the following properties * $f(\alpha x) = |\alpha| f(x)$ for all $\alpha \in \mathbb{R}$. 
* $f(x+y) \leq f(x) + f(y)$ * $f(x) \leq 0$ for all $x \in \mathbb{R}^n$ with equality if and only if $x = 0$Using the definition of convexity, try to show that a function satisfying the properties above is convex (the third condition is not needed to show this). Standard steepest descentBefore we proceed, we would like to discuss the approach called the**standard Steepest descent** (different from the above steepest descent discussion), which again leads to us having to be ableto compute a matrix. It belongs to the class of Conjugate Gradient methods (CG).[The success of the CG method](https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf)for finding solutions of non-linear problems is based on the theoryof conjugate gradients for linear systems of equations. It belongs tothe class of iterative methods for solving problems from linearalgebra of the type $$\boldsymbol{A}\boldsymbol{x} = \boldsymbol{b}.$$ In the iterative process we end up with a problem like $$\boldsymbol{r}= \boldsymbol{b}-\boldsymbol{A}\boldsymbol{x},$$ where $\boldsymbol{r}$ is the so-called residual or error in the iterative process.When we have found the exact solution, $\boldsymbol{r}=0$. Gradient methodThe residual is zero when we reach the minimum of the quadratic equation $$P(\boldsymbol{x})=\frac{1}{2}\boldsymbol{x}^T\boldsymbol{A}\boldsymbol{x} - \boldsymbol{x}^T\boldsymbol{b},$$ with the constraint that the matrix $\boldsymbol{A}$ is positive definite andsymmetric. This defines also the Hessian and we want it to be positive definite. Steepest descent methodWe denote the initial guess for $\boldsymbol{x}$ as $\boldsymbol{x}_0$. We can assume without loss of generality that $$\boldsymbol{x}_0=0,$$ or consider the system $$\boldsymbol{A}\boldsymbol{z} = \boldsymbol{b}-\boldsymbol{A}\boldsymbol{x}_0,$$ instead. Steepest descent methodOne can show that the solution $\boldsymbol{x}$ is also the unique minimizer of the quadratic form $$f(\boldsymbol{x}) = \frac{1}{2}\boldsymbol{x}^T\boldsymbol{A}\boldsymbol{x} - \boldsymbol{x}^T \boldsymbol{x} , \quad \boldsymbol{x}\in\mathbf{R}^n.$$ This suggests taking the first basis vector $\boldsymbol{r}_1$ (see below for definition) to be the gradient of $f$ at $\boldsymbol{x}=\boldsymbol{x}_0$, which equals $$\boldsymbol{A}\boldsymbol{x}_0-\boldsymbol{b},$$ and $\boldsymbol{x}_0=0$ it is equal $-\boldsymbol{b}$. Final expressionsWe can compute the residual iteratively as $$\boldsymbol{r}_{k+1}=\boldsymbol{b}-\boldsymbol{A}\boldsymbol{x}_{k+1},$$ which equals $$\boldsymbol{b}-\boldsymbol{A}(\boldsymbol{x}_k+\alpha_k\boldsymbol{r}_k),$$ or $$(\boldsymbol{b}-\boldsymbol{A}\boldsymbol{x}_k)-\alpha_k\boldsymbol{A}\boldsymbol{r}_k,$$ which gives $$\alpha_k = \frac{\boldsymbol{r}_k^T\boldsymbol{r}_k}{\boldsymbol{r}_k^T\boldsymbol{A}\boldsymbol{r}_k}$$ leading to the iterative scheme $$\boldsymbol{x}_{k+1}=\boldsymbol{x}_k-\alpha_k\boldsymbol{r}_{k},$$ Steepest descent example
###Code
import numpy as np
import numpy.linalg as la
import scipy.optimize as sopt
import matplotlib.pyplot as pt
from mpl_toolkits.mplot3d import axes3d
def f(x):
return 0.5*x[0]**2 + 2.5*x[1]**2
def df(x):
return np.array([x[0], 5*x[1]])
fig = pt.figure()
ax = fig.gca(projection="3d")
xmesh, ymesh = np.mgrid[-2:2:50j,-2:2:50j]
fmesh = f(np.array([xmesh, ymesh]))
ax.plot_surface(xmesh, ymesh, fmesh)
###Output
_____no_output_____
###Markdown
And then as countor plot
###Code
pt.axis("equal")
pt.contour(xmesh, ymesh, fmesh)
guesses = [np.array([2, 2./5])]
###Output
_____no_output_____
###Markdown
Find guesses
###Code
x = guesses[-1]
s = -df(x)
###Output
_____no_output_____
###Markdown
Run it!
###Code
def f1d(alpha):
return f(x + alpha*s)
alpha_opt = sopt.golden(f1d)
next_guess = x + alpha_opt * s
guesses.append(next_guess)
print(next_guess)
###Output
[ 1.33333333 -0.26666667]
###Markdown
What happened?
###Code
pt.axis("equal")
pt.contour(xmesh, ymesh, fmesh, 50)
it_array = np.array(guesses)
pt.plot(it_array.T[0], it_array.T[1], "x-")
###Output
_____no_output_____
###Markdown
Conjugate gradient methodIn the CG method we define so-called conjugate directions and two vectors $\boldsymbol{s}$ and $\boldsymbol{t}$are said to beconjugate if $$\boldsymbol{s}^T\boldsymbol{A}\boldsymbol{t}= 0.$$ The philosophy of the CG method is to perform searches in various conjugate directionsof our vectors $\boldsymbol{x}_i$ obeying the above criterion, namely $$\boldsymbol{x}_i^T\boldsymbol{A}\boldsymbol{x}_j= 0.$$ Two vectors are conjugate if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if $\boldsymbol{s}$ is conjugate to $\boldsymbol{t}$, then $\boldsymbol{t}$ is conjugate to $\boldsymbol{s}$. Conjugate gradient methodAn example is given by the eigenvectors of the matrix $$\boldsymbol{v}_i^T\boldsymbol{A}\boldsymbol{v}_j= \lambda\boldsymbol{v}_i^T\boldsymbol{v}_j,$$ which is zero unless $i=j$. Conjugate gradient methodAssume now that we have a symmetric positive-definite matrix $\boldsymbol{A}$ of size$n\times n$. At each iteration $i+1$ we obtain the conjugate direction of a vector $$\boldsymbol{x}_{i+1}=\boldsymbol{x}_{i}+\alpha_i\boldsymbol{p}_{i}.$$ We assume that $\boldsymbol{p}_{i}$ is a sequence of $n$ mutually conjugate directions. Then the $\boldsymbol{p}_{i}$ form a basis of $R^n$ and we can expand the solution $ \boldsymbol{A}\boldsymbol{x} = \boldsymbol{b}$ in this basis, namely $$\boldsymbol{x} = \sum^{n}_{i=1} \alpha_i \boldsymbol{p}_i.$$ Conjugate gradient methodThe coefficients are given by $$\mathbf{A}\mathbf{x} = \sum^{n}_{i=1} \alpha_i \mathbf{A} \mathbf{p}_i = \mathbf{b}.$$ Multiplying with $\boldsymbol{p}_k^T$ from the left gives $$\boldsymbol{p}_k^T \boldsymbol{A}\boldsymbol{x} = \sum^{n}_{i=1} \alpha_i\boldsymbol{p}_k^T \boldsymbol{A}\boldsymbol{p}_i= \boldsymbol{p}_k^T \boldsymbol{b},$$ and we can define the coefficients $\alpha_k$ as $$\alpha_k = \frac{\boldsymbol{p}_k^T \boldsymbol{b}}{\boldsymbol{p}_k^T \boldsymbol{A} \boldsymbol{p}_k}$$ Conjugate gradient method and iterationsIf we choose the conjugate vectors $\boldsymbol{p}_k$ carefully, then we may not need all of them to obtain a good approximation to the solution $\boldsymbol{x}$. We want to regard the conjugate gradient method as an iterative method. This will us to solve systems where $n$ is so large that the direct method would take too much time.We denote the initial guess for $\boldsymbol{x}$ as $\boldsymbol{x}_0$. We can assume without loss of generality that $$\boldsymbol{x}_0=0,$$ or consider the system $$\boldsymbol{A}\boldsymbol{z} = \boldsymbol{b}-\boldsymbol{A}\boldsymbol{x}_0,$$ instead. Conjugate gradient methodOne can show that the solution $\boldsymbol{x}$ is also the unique minimizer of the quadratic form $$f(\boldsymbol{x}) = \frac{1}{2}\boldsymbol{x}^T\boldsymbol{A}\boldsymbol{x} - \boldsymbol{x}^T \boldsymbol{x} , \quad \boldsymbol{x}\in\mathbf{R}^n.$$ This suggests taking the first basis vector $\boldsymbol{p}_1$ to be the gradient of $f$ at $\boldsymbol{x}=\boldsymbol{x}_0$, which equals $$\boldsymbol{A}\boldsymbol{x}_0-\boldsymbol{b},$$ and $\boldsymbol{x}_0=0$ it is equal $-\boldsymbol{b}$.The other vectors in the basis will be conjugate to the gradient, hence the name conjugate gradient method. 
Conjugate gradient methodLet $\boldsymbol{r}_k$ be the residual at the $k$-th step: $$\boldsymbol{r}_k=\boldsymbol{b}-\boldsymbol{A}\boldsymbol{x}_k.$$ Note that $\boldsymbol{r}_k$ is the negative gradient of $f$ at $\boldsymbol{x}=\boldsymbol{x}_k$, so the gradient descent method would be to move in the direction $\boldsymbol{r}_k$. Here, we insist that the directions $\boldsymbol{p}_k$ are conjugate to each other, so we take the direction closest to the gradient $\boldsymbol{r}_k$ under the conjugacy constraint. This gives the following expression $$\boldsymbol{p}_{k+1}=\boldsymbol{r}_k-\frac{\boldsymbol{p}_k^T \boldsymbol{A}\boldsymbol{r}_k}{\boldsymbol{p}_k^T\boldsymbol{A}\boldsymbol{p}_k} \boldsymbol{p}_k.$$ Conjugate gradient methodWe can also compute the residual iteratively as $$\boldsymbol{r}_{k+1}=\boldsymbol{b}-\boldsymbol{A}\boldsymbol{x}_{k+1},$$ which equals $$\boldsymbol{b}-\boldsymbol{A}(\boldsymbol{x}_k+\alpha_k\boldsymbol{p}_k),$$ or $$(\boldsymbol{b}-\boldsymbol{A}\boldsymbol{x}_k)-\alpha_k\boldsymbol{A}\boldsymbol{p}_k,$$ which gives $$\boldsymbol{r}_{k+1}=\boldsymbol{r}_k-\boldsymbol{A}\boldsymbol{p}_{k},$$ Revisiting our first homeworkWe will use linear regression as a case study for the gradient descentmethods. Linear regression is a great test case for the gradientdescent methods discussed in the lectures since it has severaldesirable properties such as:1. An analytical solution (recall homework set 1).2. The gradient can be computed analytically.3. The cost function is convex which guarantees that gradient descent converges for small enough learning ratesWe revisit an example similar to what we had in the first homework set. We had a function of the type
###Code
x = 2*np.random.rand(m,1)
y = 4+3*x+np.random.randn(m,1)
###Output
_____no_output_____
###Markdown
with $x_i \in [0,1] $ is chosen randomly using a uniform distribution. Additionally we have a stochastic noise chosen according to a normal distribution $\cal {N}(0,1)$. The linear regression model is given by $$h_\beta(x) = \boldsymbol{y} = \beta_0 + \beta_1 x,$$ such that $$\boldsymbol{y}_i = \beta_0 + \beta_1 x_i.$$ Gradient descent exampleLet $\mathbf{y} = (y_1,\cdots,y_n)^T$, $\mathbf{\boldsymbol{y}} = (\boldsymbol{y}_1,\cdots,\boldsymbol{y}_n)^T$ and $\beta = (\beta_0, \beta_1)^T$It is convenient to write $\mathbf{\boldsymbol{y}} = X\beta$ where $X \in \mathbb{R}^{100 \times 2} $ is the design matrix given by (we keep the intercept here) $$X \equiv \begin{bmatrix}1 & x_1 \\\vdots & \vdots \\1 & x_{100} & \\\end{bmatrix}.$$ The cost/loss/risk function is given by ( $$C(\beta) = \frac{1}{n}||X\beta-\mathbf{y}||_{2}^{2} = \frac{1}{n}\sum_{i=1}^{100}\left[ (\beta_0 + \beta_1 x_i)^2 - 2 y_i (\beta_0 + \beta_1 x_i) + y_i^2\right]$$ and we want to find $\beta$ such that $C(\beta)$ is minimized. The derivative of the cost/loss functionComputing $\partial C(\beta) / \partial \beta_0$ and $\partial C(\beta) / \partial \beta_1$ we can show that the gradient can be written as $$\nabla_{\beta} C(\beta) = \frac{2}{n}\begin{bmatrix} \sum_{i=1}^{100} \left(\beta_0+\beta_1x_i-y_i\right) \\\sum_{i=1}^{100}\left( x_i (\beta_0+\beta_1x_i)-y_ix_i\right) \\\end{bmatrix} = \frac{2}{n}X^T(X\beta - \mathbf{y}),$$ where $X$ is the design matrix defined above. The Hessian matrixThe Hessian matrix of $C(\beta)$ is given by $$\boldsymbol{H} \equiv \begin{bmatrix}\frac{\partial^2 C(\beta)}{\partial \beta_0^2} & \frac{\partial^2 C(\beta)}{\partial \beta_0 \partial \beta_1} \\\frac{\partial^2 C(\beta)}{\partial \beta_0 \partial \beta_1} & \frac{\partial^2 C(\beta)}{\partial \beta_1^2} & \\\end{bmatrix} = \frac{2}{n}X^T X.$$ This result implies that $C(\beta)$ is a convex function since the matrix $X^T X$ always is positive semi-definite. Simple programWe can now write a program that minimizes $C(\beta)$ using the gradient descent method with a constant learning rate $\gamma$ according to $$\beta_{k+1} = \beta_k - \gamma \nabla_\beta C(\beta_k), \ k=0,1,\cdots$$ We can use the expression we computed for the gradient and let use a$\beta_0$ be chosen randomly and let $\gamma = 0.001$. Stop iteratingwhen $||\nabla_\beta C(\beta_k) || \leq \epsilon = 10^{-8}$. **Note that the code below does not include the latter stop criterion**.And finally we can compare our solution for $\beta$ with the analytic result given by $\beta= (X^TX)^{-1} X^T \mathbf{y}$. Gradient Descent ExampleHere our simple example
###Code
# Importing various packages
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import sys
# the number of datapoints
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
# Hessian matrix
H = (2.0/n)* X.T @ X
# Get the eigenvalues
EigValues, EigVectors = np.linalg.eig(H)
print("Eigenvalues of Hessian Matrix: ", EigValues)
beta_linreg = np.linalg.inv(X.T @ X) @ X.T @ y
print("Matrix inversion:", beta_linreg)
beta = np.random.randn(2,1)
eta = 1.0/np.max(EigValues) # 0.5 --> doesn't converge anymore or 0.0001 and decrese number of itrations, doesnt converge either
Niterations = 1000
for iter in range(Niterations):
gradient = (2.0/n)*X.T @ (X @ beta-y)
beta -= eta*gradient
print("Gradient Descent:", beta)
xnew = np.array([[0],[2]])
xbnew = np.c_[np.ones((2,1)), xnew]
ypredict = xbnew.dot(beta)
ypredict2 = xbnew.dot(beta_linreg)
plt.plot(xnew, ypredict, "r-")
plt.plot(xnew, ypredict2, "b-")
plt.plot(x, y ,'ro')
plt.axis([0,2.0,0, 15.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Gradient descent example')
plt.show()
###Output
Eigenvalues of Hessian Matrix: [0.31449802 4.3087576 ]
Matrix inversion: [[4.2114462 ]
[2.71089321]]
Gradient Descent: [[4.2114462 ]
[2.71089321]]
###Markdown
And a corresponding example using **scikit-learn**
###Code
# Importing various packages
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDRegressor
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
beta_linreg = np.linalg.inv(X.T @ X) @ (X.T @ y)
print(beta_linreg)
sgdreg = SGDRegressor(max_iter = 50, penalty=None, eta0=0.1)
sgdreg.fit(x,y.ravel())
print(sgdreg.intercept_, sgdreg.coef_)
###Output
[[4.05265693]
[2.91394133]]
[4.09000858] [2.98637528]
###Markdown
Gradient descent and RidgeWe have also discussed Ridge regression where the loss function contains a regularized term given by the $L_2$ norm of $\beta$, $$C_{\text{ridge}}(\beta) = \frac{1}{n}||X\beta -\mathbf{y}||^2 + \lambda ||\beta||^2, \ \lambda \geq 0.$$ In order to minimize $C_{\text{ridge}}(\beta)$ using GD we only have adjust the gradient as follows $$\nabla_\beta C_{\text{ridge}}(\beta) = \frac{2}{n}\begin{bmatrix} \sum_{i=1}^{100} \left(\beta_0+\beta_1x_i-y_i\right) \\\sum_{i=1}^{100}\left( x_i (\beta_0+\beta_1x_i)-y_ix_i\right) \\\end{bmatrix} + 2\lambda\begin{bmatrix} \beta_0 \\ \beta_1\end{bmatrix} = 2 (X^T(X\beta - \mathbf{y})+\lambda \beta).$$ We can easily extend our program to minimize $C_{\text{ridge}}(\beta)$ using gradient descent and compare with the analytical solution given by $$\beta_{\text{ridge}} = \left(X^T X + \lambda I_{2 \times 2} \right)^{-1} X^T \mathbf{y}.$$ Program example for gradient descent with Ridge Regression
###Code
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import sys
# the number of datapoints
n = 100
x = 2*np.random.rand(n,1)
y = 4+3*x+np.random.randn(n,1)
X = np.c_[np.ones((n,1)), x]
XT_X = X.T @ X
#Ridge parameter lambda
lmbda = 0.001
Id = lmbda* np.eye(XT_X.shape[0])
beta_linreg = np.linalg.inv(XT_X+Id) @ X.T @ y
print(beta_linreg)
# Start plain gradient descent
beta = np.random.randn(2,1)
eta = 0.1
Niterations = 100
for iter in range(Niterations):
gradients = 2.0/n*X.T @ (X @ (beta)-y)+2*lmbda*beta
beta -= eta*gradients
print(beta)
ypredict = X @ beta
ypredict2 = X @ beta_linreg
plt.plot(x, ypredict, "r-")
plt.plot(x, ypredict2, "b-")
plt.plot(x, y ,'ro')
plt.axis([0,2.0,0, 15.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Gradient descent example for Ridge')
plt.show()
###Output
[[3.91024633]
[3.04451996]]
[[3.82835477]
[3.11106185]]
###Markdown
Using gradient descent methods, limitations* **Gradient descent (GD) finds local minima of our function**. Since the GD algorithm is deterministic, if it converges, it will converge to a local minimum of our cost/loss/risk function. Because in ML we are often dealing with extremely rugged landscapes with many local minima, this can lead to poor performance.* **GD is sensitive to initial conditions**. One consequence of the local nature of GD is that initial conditions matter. Depending on where one starts, one will end up at a different local minima. Therefore, it is very important to think about how one initializes the training process. This is true for GD as well as more complicated variants of GD.* **Gradients are computationally expensive to calculate for large datasets**. In many cases in statistics and ML, the cost/loss/risk function is a sum of terms, with one term for each data point. For example, in linear regression, $E \propto \sum_{i=1}^n (y_i - \mathbf{w}^T\cdot\mathbf{x}_i)^2$; for logistic regression, the square error is replaced by the cross entropy. To calculate the gradient we have to sum over *all* $n$ data points. Doing this at every GD step becomes extremely computationally expensive. An ingenious solution to this, is to calculate the gradients using small subsets of the data called "mini batches". This has the added benefit of introducing stochasticity into our algorithm.* **GD is very sensitive to choices of learning rates**. GD is extremely sensitive to the choice of learning rates. If the learning rate is very small, the training process take an extremely long time. For larger learning rates, GD can diverge and give poor results. Furthermore, depending on what the local landscape looks like, we have to modify the learning rates to ensure convergence. Ideally, we would *adaptively* choose the learning rates to match the landscape.* **GD treats all directions in parameter space uniformly.** Another major drawback of GD is that unlike Newton's method, the learning rate for GD is the same in all directions in parameter space. For this reason, the maximum learning rate is set by the behavior of the steepest direction and this can significantly slow down training. Ideally, we would like to take large steps in flat directions and small steps in steep directions. Since we are exploring rugged landscapes where curvatures change, this requires us to keep track of not only the gradient but second derivatives. The ideal scenario would be to calculate the Hessian but this proves to be too computationally expensive. * GD can take exponential time to escape saddle points, even with random initialization. As we mentioned, GD is extremely sensitive to initial condition since it determines the particular local minimum GD would eventually reach. However, even with a good initialization scheme, through the introduction of randomness, GD can still take exponential time to escape saddle points. 
Friday October 1 Stochastic Gradient DescentStochastic gradient descent (SGD) and variants thereof address some ofthe shortcomings of the Gradient descent method discussed above.The underlying idea of SGD comes from the observation that the costfunction, which we want to minimize, can almost always be written as asum over $n$ data points $\{\mathbf{x}_i\}_{i=1}^n$, $$C(\mathbf{\beta}) = \sum_{i=1}^n c_i(\mathbf{x}_i,\mathbf{\beta}).$$ Computation of gradientsThis in turn means that the gradient can becomputed as a sum over $i$-gradients $$\nabla_\beta C(\mathbf{\beta}) = \sum_i^n \nabla_\beta c_i(\mathbf{x}_i,\mathbf{\beta}).$$ Stochasticity/randomness is introduced by only taking thegradient on a subset of the data called minibatches. If there are $n$data points and the size of each minibatch is $M$, there will be $n/M$minibatches. We denote these minibatches by $B_k$ where$k=1,\cdots,n/M$. SGD exampleAs an example, suppose we have $10$ data points $(\mathbf{x}_1,\cdots, \mathbf{x}_{10})$ and we choose to have $M=5$ minibathces,then each minibatch contains two data points. In particular we have$B_1 = (\mathbf{x}_1,\mathbf{x}_2), \cdots, B_5 =(\mathbf{x}_9,\mathbf{x}_{10})$. Note that if you choose $M=1$ youhave only a single batch with all data points and on the other extreme,you may choose $M=n$ resulting in a minibatch for each datapoint, i.e$B_k = \mathbf{x}_k$.The idea is now to approximate the gradient by replacing the sum overall data points with a sum over the data points in one the minibatchespicked at random in each gradient descent step $$\nabla_{\beta}C(\mathbf{\beta}) = \sum_{i=1}^n \nabla_\beta c_i(\mathbf{x}_i,\mathbf{\beta}) \rightarrow \sum_{i \in B_k}^n \nabla_\betac_i(\mathbf{x}_i, \mathbf{\beta}).$$ The gradient stepThus a gradient descent step now looks like $$\beta_{j+1} = \beta_j - \gamma_j \sum_{i \in B_k}^n \nabla_\beta c_i(\mathbf{x}_i,\mathbf{\beta})$$ where $k$ is picked at random with equalprobability from $[1,n/M]$. An iteration over the number ofminibathces (n/M) is commonly referred to as an epoch. Thus it istypical to choose a number of epochs and for each epoch iterate overthe number of minibatches, as exemplified in the code below. Simple example code
###Code
import numpy as np
n = 100 #100 datapoints
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
n_epochs = 10 #number of epochs
j = 0
for epoch in range(1,n_epochs+1):
for i in range(m):
k = np.random.randint(m) #Pick the k-th minibatch at random
#Compute the gradient using the data in minibatch Bk
#Compute new suggestion for
j += 1
###Output
_____no_output_____
###Markdown
Taking the gradient only on a subset of the data has two importantbenefits. 1. First, it **introduces randomness** which decreases the chance that our opmization scheme gets stuck in a **local minima. **2. Second, if the **size of the minibatches are small** relative to the number of datapoints ($M When do we stop?*A natural question is when do we stop the search for a new minimum?*1. One possibility is to compute the **full gradient after a given number of epochs** and check if the **norm of the gradient is smaller than some threshold** and stop if true. However, the condition that the gradient is zero is valid also for local minima, so this would only tell us that we are close to a local/global minimum. 2. However, we could also evaluate the **cost function** at this point, store the result and continue the search. If the test kicks in at a later stage we can **compare the values** of the cost function and keep the $\beta$ that gave the lowest value. Slightly different approach3. Another approach is to let the **step length $\eta_j$ depend on the number of epochs** in such a way that it **becomes very small after a reasonable time** such that we do not move at all.As an example, let $e = 0,1,2,3,\cdots$ denote the current epoch and let $t_0, t_1 > 0$ be two fixed numbers. Furthermore, let $t = e \cdot m + i$ where $m$ is the number of minibatches and $i=0,\cdots,m-1$. Then the function $$\eta_j(t; t_0, t_1) = \frac{t_0}{t+t_1} $$ goes to zero as the number of epochs gets large. I.e. we start with a step length $\eta_j (0; t_0, t_1) = t_0/t_1$ which decays in *time* $t$.**$\rightarrow$ In this way we can fix the number of epochs, compute $\beta$ andevaluate the cost function at the end. Repeating the computation willgive a different result since the scheme is random by design. Then wepick the final $\beta$ that gives the lowest value of the costfunction.**
###Code
import numpy as np
def step_length(t,t0,t1):
return t0/(t+t1)
n = 100 #100 datapoints
M = 5 #size of each minibatch
m = int(n/M) #number of minibatches
n_epochs = 500 #number of epochs
t0 = 1.0
t1 = 10
gamma_j = t0/t1
j = 0
for epoch in range(1,n_epochs+1):
for i in range(m):
k = np.random.randint(m) #Pick the k-th minibatch at random
#Compute the gradient using the data in minibatch Bk
#Compute new suggestion for beta
t = epoch*m+i
gamma_j = step_length(t,t0,t1)
j += 1
print("gamma_j after %d epochs: %g" % (n_epochs,gamma_j))
###Output
gamma_j after 500 epochs: 9.97108e-05
###Markdown
Program for stochastic gradient
###Code
# Importing various packages
from math import exp, sqrt
from random import random, seed
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDRegressor
m = 100
x = 2*np.random.rand(m,1)
y = 4+3*x+np.random.randn(m,1)
X = np.c_[np.ones((m,1)), x]
# our own code
beta_linreg = np.linalg.inv(X.T @ X) @ (X.T @ y)
print("Own inversion: ", beta_linreg[0], beta_linreg[1])
# our own gradient discent
beta = np.random.randn(2,1)
eta = 0.1
Niterations = 1000
for iter in range(Niterations):
gradients = 2.0/m*X.T @ ((X @ beta)-y)
beta -= eta*gradients
print("beta from own gd: ", beta[0],beta[1])
xnew = np.array([[0],[2]])
Xnew = np.c_[np.ones((2,1)), xnew]
ypredict = Xnew.dot(beta)
ypredict2 = Xnew.dot(beta_linreg)
# our own stochastic gradient descent
n_epochs = 50
t0, t1 = 5, 50
def learning_schedule(t):
return t0/(t+t1)
beta = np.random.randn(2,1)
for epoch in range(n_epochs):
for i in range(m):
random_index = np.random.randint(m)
xi = X[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T @ ((xi @ beta)-yi)
eta = learning_schedule(epoch*m+i)
beta = beta - eta*gradients
print("beta from own sdg: ", beta[0],beta[1])
# Sk: stochastic gradient descent
sgdreg = SGDRegressor(max_iter = 50, penalty=None, eta0=0.1)
sgdreg.fit(x,y.ravel())
print("sgdreg from scikit:", sgdreg.intercept_,sgdreg.coef_)
plt.plot(xnew, ypredict, "r-")
plt.plot(xnew, ypredict2, "b-")
plt.plot(x, y ,'ro')
plt.axis([0,2.0,0, 15.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Random numbers ')
plt.show()
###Output
Own inversion: [4.01172064] [2.99266688]
beta from own gd: [4.01172064] [2.99266688]
beta from own sdg: [4.00331435] [2.95372898]
sgdreg from scikit: [4.0119161] [3.03951596]
|
notebooks/2-DataPreparation/2-CleanData/7-DB-REFACTORING-MINER.ipynb | ###Markdown
**REFACTORING_MINER**This notebook the cleaning of the attributes of the table `REFACTORING_MINER`.First, we import the libraries we need and, then, we read the corresponding csv.
###Code
import pandas as pd
import numpy as np
refactoringMiner = pd.read_csv("../../../data/interim/DataPreparation/SelectData/REFACTORING_MINER_select.csv")
print(refactoringMiner.shape)
refactoringMiner.head()
###Output
(57530, 4)
###Markdown
We define a function that returns, given two lists, their intersection.
###Code
def intersection(l1, l2):
temp = set(l2)
l3 = [value for value in l1 if value in temp]
return l3
###Output
_____no_output_____
###Markdown
Next, for each attribute, we treat the missing values. projectID
###Code
len(refactoringMiner.projectID.unique())
projectIDNan = list(np.where(refactoringMiner.projectID.isna()))[0]
len(projectIDNan)
###Output
_____no_output_____
###Markdown
commitHash
###Code
len(refactoringMiner.commitHash.unique())
commitHashNan = list(np.where(refactoringMiner.commitHash.isna()))[0]
len(commitHashNan)
###Output
_____no_output_____
###Markdown
refactoringType
###Code
len(refactoringMiner.refactoringType.unique())
refactoringTypeNan = list(np.where(refactoringMiner.refactoringType.isna()))[0]
len(refactoringTypeNan)
inters = intersection(commitHashNan, refactoringTypeNan)
len(inters)
###Output
_____no_output_____
###Markdown
---We remove these rows because they have 2 attributes with a missing value and we can not obtain this information. Finally we will have 57.528 rows.
###Code
refactoringMiner = refactoringMiner.drop(inters)
refactoringMiner.shape
###Output
_____no_output_____
###Markdown
We save it into a new csv.
###Code
refactoringMiner.to_csv('../../../data/interim/DataPreparation/CleanData/REFACTORING_MINER_clean.csv', header=True)
###Output
_____no_output_____ |
Week 7/LinAlg_58051_Gonzales_Matrices.ipynb | ###Markdown
TASK 1
###Code
import numpy as np
def mat_desc(matrix):
MShape = matrix.shape
MSize = matrix.size
MRank = matrix.ndim
IsSquare = MShape[0] == MShape[1]
print(f'Matrix:\n{matrix}\n\nShape:\t{MShape}\nSize:\t{MSize}\nRank:\t{MRank}\n')
print(f'Is Empty: {MSize == 0}\nIs Square: {IsSquare}\n')
IsIdentity = np.sum(np.identity(MShape[0]) == matrix) == MSize
total = np.sum(matrix)
IsOne = False
IsZero = False
if total == MSize:
IsOne = True
elif total == 0:
IsZero = True
print(f'Is Identity: {IsIdentity}\nIs One Matrix: {IsOne}\nIs Zero Matrix: {IsZero}\n')
A = np.array([[0, 3, 2, 4], [2, 0, 1, 4], [1, 3, 0, 3], [2, 4, 5, 1]])
B = np.array([[2, 9, 6],[0, 3, 5],[0, 0, 9], [2, 4, 6]])
C = np.array([[1, 0, 0, 0, 0],[0, 1, 0, 0, 0],[0, 0, 1, 0, 0],[0, 0, 0, 1, 0],[0, 0, 0, 0, 1]])
D = np.array([[0, 0, 0],[0, 0, 0],[0, 0, 0]])
E = np.array([[1, 1, 1, 1],[1, 1, 1, 1],[1, 1, 1, 1],[1, 1, 1, 1]])
matrices = [A, B, C, D, E]
for i in matrices:
mat_desc(i)
print('end of result'.center(40,'-'))
###Output
Matrix:
[[0 3 2 4]
[2 0 1 4]
[1 3 0 3]
[2 4 5 1]]
Shape: (4, 4)
Size: 16
Rank: 2
Is Empty: False
Is Square: True
Is Identity: False
Is One Matrix: False
Is Zero Matrix: False
-------------end of result--------------
Matrix:
[[2 9 6]
[0 3 5]
[0 0 9]
[2 4 6]]
Shape: (4, 3)
Size: 12
Rank: 2
Is Empty: False
Is Square: False
Is Identity: False
Is One Matrix: False
Is Zero Matrix: False
-------------end of result--------------
Matrix:
[[1 0 0 0 0]
[0 1 0 0 0]
[0 0 1 0 0]
[0 0 0 1 0]
[0 0 0 0 1]]
Shape: (5, 5)
Size: 25
Rank: 2
Is Empty: False
Is Square: True
Is Identity: True
Is One Matrix: False
Is Zero Matrix: False
-------------end of result--------------
Matrix:
[[0 0 0]
[0 0 0]
[0 0 0]]
Shape: (3, 3)
Size: 9
Rank: 2
Is Empty: False
Is Square: True
Is Identity: False
Is One Matrix: False
Is Zero Matrix: True
-------------end of result--------------
Matrix:
[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]
Shape: (4, 4)
Size: 16
Rank: 2
Is Empty: False
Is Square: True
Is Identity: False
Is One Matrix: True
Is Zero Matrix: False
-------------end of result--------------
###Markdown
TASK 2
###Code
import numpy as np
class DimensionError(Exception):
pass
def mat_desc(matrix):
MShape = matrix.shape
MSize = matrix.size
MRank = matrix.ndim
IsSquare = MShape[0] == MShape[1]
print(f'Matrix:\n{matrix}\n\nShape:\t{MShape}\nSize:\t{MSize}\nRank:\t{MRank}\n')
print(f'Is Empty: {MSize == 0}\nIs Square: {IsSquare}\n')
IsIdentity = np.sum(np.identity(MShape[0]) == matrix) == MSize
total = np.sum(matrix)
IsOne = False
IsZero = False
if total == MSize:
IsOne = True
elif total == 0:
IsZero = True
print(f'Is Identity: {IsIdentity}\nIs One Matrix: {IsOne}\nIs Zero Matrix: {IsZero}\n')
def mat_operations(matrix1, matrix2):
return_bool = False
if not isinstance(matrix1, np.ndarray):
print(f'{matrix1} is not a matrix.')
return_bool = True
if not isinstance(matrix2, np.ndarray):
print(f'{matrix2} is not a matrix.')
return_bool = True
if return_bool == True:
return ''
mat_desc(matrix1)
print('end for Matrix 1'.center(40,'-'))
mat_desc(matrix2)
print('end for Matrix 2'.center(40,'-'))
if matrix1.shape != matrix2.shape:
raise DimensionError("The two matrices input have different dimensions")
sum = matrix1 + matrix2
diff = matrix1 - matrix2
mult = matrix1 * matrix2
divi = matrix1 / matrix2
return [sum, diff, mult, divi]
A = np.array([[0, 3, 2, 4], [2, 0, 1, 4], [1, 3, 0, 3], [2, 4, 5, 1]])
B = np.array([[2, 9, 6],[0, 3, 5],[0, 0, 9], [2, 4, 6]])
C = np.array([[1, 0, 0, 0, 0],[0, 1, 0, 0, 0],[0, 0, 1, 0, 0],[0, 0, 0, 1, 0],[0, 0, 0, 0, 1]])
D = np.array([[0, 0, 0],[0, 0, 0],[0, 0, 0]])
E = np.array([[1, 1, 1, 1],[1, 1, 1, 1],[1, 1, 1, 1],[1, 1, 1, 1]])
print(mat_operations(A, E))
###Output
Matrix:
[[0 3 2 4]
[2 0 1 4]
[1 3 0 3]
[2 4 5 1]]
Shape: (4, 4)
Size: 16
Rank: 2
Is Empty: False
Is Square: True
Is Identity: False
Is One Matrix: False
Is Zero Matrix: False
------------end for Matrix 1------------
Matrix:
[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]
Shape: (4, 4)
Size: 16
Rank: 2
Is Empty: False
Is Square: True
Is Identity: False
Is One Matrix: True
Is Zero Matrix: False
------------end for Matrix 2------------
[array([[1, 4, 3, 5],
[3, 1, 2, 5],
[2, 4, 1, 4],
[3, 5, 6, 2]]), array([[-1, 2, 1, 3],
[ 1, -1, 0, 3],
[ 0, 2, -1, 2],
[ 1, 3, 4, 0]]), array([[0, 3, 2, 4],
[2, 0, 1, 4],
[1, 3, 0, 3],
[2, 4, 5, 1]]), array([[0., 3., 2., 4.],
[2., 0., 1., 4.],
[1., 3., 0., 3.],
[2., 4., 5., 1.]])]
|
tutorials/FITS-cubes/FITS-cubes.ipynb | ###Markdown
Working with FITS-cubes Authors[Dhanesh Krishnarao (DK)](http://www.astronomy.dk), [Shravan Shetty](http://www.astro.wisc.edu/our-people/post-doctoral-students/shetty-shravan/), [Diego Gonzalez-Casanova](http://www.astro.wisc.edu/our-people/graduate-students/gonzalez-casanova-diego/), [Audra Hernandez](http://www.astro.wisc.edu/our-people/scientists/hernandez-audra/), Kris Stern, Kelle Cruz, Stephanie Douglas Learning Goals* Find and download data using `astroquery`* Read and plot slices across different dimensions of a data cube* Compare different data sets (2D and 3D) by overploting contours* Transform coordinate projections and match data resolutions with `reproject`* Create intensity moment maps / velocity maps with `spectral_cube` KeywordsFITS, image manipulation, data cubes, radio astronomy, WCS, astroquery, reproject, spectral cube, matplotlib, contour plots, colorbar SummaryIn this tutorial we will visualize 2D and 3D data sets in Galactic and equatorial coordinates. The tutorial will walk you though a visual analysis of the Small Magellanic Cloud (SMC) using HI 21cm emission and a Herschel 250 micron map. We will learn how to read in data from a file, query and download matching data from Herschel using astroquery, and plot the resulting images in a multitude of ways. The primary libraries we'll be using are: [astroquery](http://www.astropy.org/astroquery/), [spectral_cube](https://spectral-cube.readthedocs.io/en/latest/), [reproject](https://reproject.readthedocs.io/en/stable/), [matplotlib](https://matplotlib.org/)) They can be installed using conda: ```conda install -c conda-forge astroqueryconda install -c conda-forge spectral-cubeconda install -c conda-forge reproject``` Alternatively, if you don't use conda, you can use pip.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import astropy.units as u
from astropy.utils.data import download_file
from astropy.io import fits # We use fits to open the actual data file
from astropy.utils import data
data.conf.remote_timeout = 60
from spectral_cube import SpectralCube
from astroquery.esasky import ESASky
from astroquery.utils import TableList
from astropy.wcs import WCS
from reproject import reproject_interp
###Output
_____no_output_____
###Markdown
Download the HI DataWe'll be using HI 21 cm emission data from the [HI4Pi survey](http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1610.06175). We want to look at neutral gas emission from the Magellanic Clouds and learn about the kinematics of the system and column densities. Using the VizieR catalog, we've found a relevant data cube to use that covers this region of the sky. You can also download an allsky data cube, but this is a very large file, so picking out sub-sections can be useful!For us, the [relevant file is available via ftp from CDS Strasbourg](http://cdsarc.u-strasbg.fr/vizier/ftp/cats/J/A+A/594/A116/CUBES/GAL/TAN/TAN_C14.fits). We have a reduced version of it which will be a FITS data cube in Galactic coordinates using the tangential sky projection.Sure, we could download this file directly, but why do that when we can load it up via one line of code and have it ready to use in our cache? Download the HI Fits Cube
###Code
# Downloads the HI data in a fits file format
hi_datafile = download_file(
'http://data.astropy.org/tutorials/FITS-cubes/reduced_TAN_C14.fits',
cache=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Awesome, so now we have a copy of the data file (a FITS file). So how do we do anything with it?Luckily for us, the [spectral_cube](https://spectral-cube.readthedocs.io/en/latest/) package does a lot of the nitty gritty work for us to manipulate this data and even quickly look through it. So let's open up our data file and read in the data as a SpectralCube!The variable `cube` has the data using SpectralCube and `hi_data` is the data cube from the FITS file without the special formating from SpectralCube.
###Code
hi_data = fits.open(hi_datafile) # Open the FITS file for reading
cube = SpectralCube.read(hi_data) # Initiate a SpectralCube
hi_data.close() # Close the FITS file - we already read it in and don't need it anymore!
###Output
_____no_output_____
###Markdown
If you happen to already have the FITS file on your system, you can also skip the fits.open step and just directly read a FITS file with SpectralCube like this:`cube = SpectralCube.read('path_to_data_file/TAN_C14.fits') `So what does this SpectralCube object actually look like? Let's find out! The first check is to print out the cube.
###Code
print(cube)
###Output
_____no_output_____
###Markdown
Some things to pay attention to here:A data cube has three axes. In this case, there is Galactic Longitude (x), Galactic Latitude (y), and a spectral axis in terms of a LSR Velocity (z - listed as s with `spectral_cube`).The data hidden in the cube lives as an ndarray with shape (n_s, n_y, n_x) so that axis 0 corresponds with the Spectral Axis, axis 1 corresponds with the Galactic Latitude Axis, and axis 2 corresponds with the Galactic Longitude Axis. When we `print(cube)` we can see the shape, size, and units of all axes as well as the data stored in the cube. With this cube, the units of the data in the cube are temperatures (K). The spatial axes are in degrees and the Spectral Axis is in (meters / second).The cube also contains information about the coordinates corresponding to the data in the form of a WCS (World Coordinate System) object. SpectralCube is clever and keeps all the data masked until you really need it so that you can work with large sets of data. So let's see what our data actually looks like!SpectralCube has a `quicklook()` method which can give a handy sneak-peek preview of the data. It's useful when you just need to glance at a slice or spectrum without knowing any other information (say, to make sure the data isn't corrupted or is looking at the right region.) To do this, we index our cube along one axis (for a slice) or two axes (for a spectrum):
###Code
cube[300, :, :].quicklook() # Slice the cube along the spectral axis, and display a quick image
cube[:, 75, 75].quicklook() # Extract a single spectrum through the data cube
###Output
_____no_output_____
###Markdown
Try messing around with slicing the cube along different axes, or picking out different spectra Make a smaller cube, focusing on the Magellanic CloudsThe HI data cube we downloaded is bigger than we actually need it to be. Let's try zooming in on just the part we need and make a new `sub_cube`. The easiest way to do this is to cut out part of the cube with indices or coordinates.We can extract the world coordinates from the cube using the `.world()` method. Warning: using .world() will extract coordinates from every position you ask for. This can be a TON of data if you don't slice through the cube. One work around is to slice along two axes and extract coordinates just along a single dimension. The output of `.world()` is an Astropy `Quantity` representing the pixel coordinates, which includes units. You can extract these Astropy `Quantity` objects by slicing the data.
###Code
_, b, _ = cube.world[0, :, 0] #extract latitude world coordinates from cube
_, _, l = cube.world[0, 0, :] #extract longitude world coordinates from cube
###Output
_____no_output_____
###Markdown
You can then extract a `sub_cube` in the spatial coordinates of the cube
###Code
# Define desired latitude and longitude range
lat_range = [-46, -40] * u.deg
lon_range = [306, 295] * u.deg
# Create a sub_cube cut to these coordinates
sub_cube = cube.subcube(xlo=lon_range[0], xhi=lon_range[1], ylo=lat_range[0], yhi=lat_range[1])
print(sub_cube)
###Output
_____no_output_____
###Markdown
Cut along the Spectral Axis:We don't really need data from such a large velocity range so let's just extract a little slab. We can do this in any units that we want using the `.spectral_slab()` method.
###Code
sub_cube_slab = sub_cube.spectral_slab(-300. *u.km / u.s, 300. *u.km / u.s)
print(sub_cube_slab)
###Output
_____no_output_____
###Markdown
Moment MapsMoment maps are a useful analysis tool to study data cubes. In short, a moment is a weighted integral along an axis (typically the Spectral Axis) that can give information about the total Intensity (or column density), mean velocity, or velocity dispersion along lines of sight. SpectralCube makes this very simple with the `.moment()` method. We can convert to friendlier spectral units of km/s and these new 2D projections can be saved as new FITS files, complete with modified WCS information as well.
###Code
moment_0 = sub_cube_slab.with_spectral_unit(u.km/u.s).moment(order=0) # Zero-th moment
moment_1 = sub_cube_slab.with_spectral_unit(u.km/u.s).moment(order=1) # First moment
# Write the moments as a FITS image
# moment_0.write('hi_moment_0.fits')
# moment_1.write('hi_moment_1.fits')
print('Moment_0 has units of: ', moment_0.unit)
print('Moment_1 has units of: ', moment_1.unit)
# Convert Moment_0 to a Column Density assuming optically thin media
hi_column_density = moment_0 * 1.82 * 10**18 / (u.cm * u.cm) * u.s / u.K / u.km
###Output
_____no_output_____
###Markdown
Display the Moment MapsThe [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/) framework in Astropy allows us to display images with different coordinate axes and projections.As long as we have a WCS object associated with the data, we can transfer that projection to a matplotlib axis. SpectralCube makes it possible to access just the WCS object associated with a cube object.
###Code
print(moment_1.wcs) # Examine the WCS object associated with the moment map
###Output
_____no_output_____
###Markdown
As expected, the first moment image we created only has two axes (Galactic Longitude and Galactic Latitude). We can pass in this WCS object directly into a matplotlib axis instance.
###Code
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=moment_1.wcs)
# Display the moment map image
im = ax.imshow(moment_1.hdu.data, cmap='RdBu_r', vmin=0, vmax=200)
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Galactic Longitude (degrees)", fontsize=16)
ax.set_ylabel("Galactic Latitude (degrees)", fontsize=16)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07)
cbar.set_label('Velocity (km/s)', size=16)
# Overlay set of RA/Dec Axes
overlay = ax.get_coords_overlay('fk5')
overlay.grid(color='white', ls='dotted', lw=2)
overlay[0].set_axislabel('Right Ascension (J2000)', fontsize=16)
overlay[1].set_axislabel('Declination (J2000)', fontsize=16)
# Overplot column density contours
levels = (1e20, 5e20, 1e21, 3e21, 5e21, 7e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap='Greys_r', alpha=0.5, levels=levels)
###Output
_____no_output_____
###Markdown
As you can see, the WCSAxes framework is very powerful and similar to making any matplotlib style plot. Display a Longitude-Velocity SliceThe [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/) framework in Astropy also lets us slice the data across different dimensions. It is often useful to slice along a single latitude and display an image showing longitude and velocity information only (position-velocity or longitude-velocity diagram).This can be done by specifying the `slices` keyword and selecting the appropriate slice through the data. `slices` requires a 3-element tuple containing the index to be sliced along and where we want the two axes to be displayed. This should be specified in the same order as the WCS object (longitude, latitude, velocity) as opposed to the order of the numpy array holding the data (velocity, latitude, longitude). We then select the appropriate data by indexing along the numpy array.
###Code
lat_slice = 18 # Index of latitude dimension to slice along
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=sub_cube_slab.wcs, slices=('y', lat_slice, 'x'))
# Above, we have specified to plot the longitude along the y axis, pick only the lat_slice
# indicated, and plot the velocity along the x axis
# Display the slice
im = ax.imshow(sub_cube_slab[:, lat_slice, :].transpose().data) # Display the image slice
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("LSR Velocity (m/s)", fontsize=16)
ax.set_ylabel("Galactic Longitude (degrees)", fontsize=16)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07, orientation='horizontal')
cbar.set_label('Temperature (K)', size=16)
###Output
_____no_output_____
###Markdown
As we can see, the SMC seems to be only along positive velocities. Try: Create a new spectral slab isolating just the SMC and slice along a different dimension to create a latitude-velocity diagram Find and Download a Herschel ImageThis is great, but we want to compare the HI emission data with Herschel 350 micron emission to trace some dust. This can be done with [astroquery](http://www.astropy.org/astroquery/). We can query for the data by mission, take a quick look at the table of results, and download data after selecting a specific wavelength or filter. Since we are looking for Herschel data from an ESA mission, we will use the [astroquery.ESASky](http://astroquery.readthedocs.io/en/latest/esasky/esasky.html) class.Specifically, the `ESASKY.query_region_maps()` method allows us to search for a specific region of the sky either using an Astropy SkyCoord object or a string specifying an object name. In this case, we can just search for the SMC. A radius to search around the object can also be specified.
###Code
# Query for Herschel data in a 1 degree radius around the SMC
result = ESASky.query_region_maps('SMC', radius=1*u.deg, missions='Herschel')
print(result)
###Output
_____no_output_____
###Markdown
Here, the result is a TableList which contains 24 Herschel data products that can be downloaded. We can see what information is available in this TableList by examining the keys in the Herschel Table.
###Code
result['HERSCHEL'].keys()
###Output
_____no_output_____
###Markdown
We want to find a 350 micron image, so we need to look closer at the filters used for these observations.
###Code
result['HERSCHEL']['filter']
###Output
_____no_output_____
###Markdown
Luckily for us, there is an observation made with three filters: 250, 350, and 500 microns. This is the object we will want to download. One way to do this is by making a boolean mask to select out the Table entry corresponding with the desired filter. Then, the `ESASky.get_maps()` method will download our data provided a TableList argument. Note that the below command requires an internet connection to download five quite large files. It could take several minutes to complete.
###Code
filters = result['HERSCHEL']['filter'].astype(str) # Convert the list of filters from the query to a string
# Construct a boolean mask, searching for only the desired filters
mask = np.array(['250, 350, 500' == s for s in filters], dtype='bool')
# Re-construct a new TableList object containing only our desired query entry
target_obs = TableList({"HERSCHEL":result['HERSCHEL'][mask]}) # This will be passed into ESASky.get_maps()
IR_images = ESASky.get_maps(target_obs) # Download the images
IR_images['HERSCHEL'][0]['350'].info() # Display some information about the 350 micron image
###Output
_____no_output_____
###Markdown
Since we are just doing some qualitative analysis, we only need the image, but you can also access lots of other information from our downloaded object, such as errors. Let's go ahead and extract just the WCS information and image data from the 350 micron image.
###Code
herschel_header = IR_images['HERSCHEL'][0]['350']['image'].header
herschel_wcs = WCS(IR_images['HERSCHEL'][0]['350']['image']) # Extract WCS information
herschel_imagehdu = IR_images['HERSCHEL'][0]['350']['image'] # Extract Image data
print(herschel_wcs)
###Output
_____no_output_____
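###Markdown
As a quick sanity check of the extracted WCS, you can ask for the sky position of a pixel. The sketch below assumes the 350 micron image is a plain 2-D celestial map, so that `pixel_to_world()` returns a SkyCoord:
###Code
ny, nx = herschel_imagehdu.data.shape  # Image dimensions (rows, columns)
center_coord = herschel_wcs.pixel_to_world(nx / 2, ny / 2)  # Sky coordinate of the image centre
print(center_coord)
###Output
_____no_output_____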
###Markdown
With this, we can display this image using matplotlib with [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/index.html) and the `LogNorm()` object so we can log scale our image.
###Code
# Set Nans to zero
himage_nan_locs = np.isnan(herschel_imagehdu.data)
herschel_data_nonans = herschel_imagehdu.data
herschel_data_nonans[himage_nan_locs] = 0
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=herschel_wcs)
# Display the Herschel 350 micron image
im = ax.imshow(herschel_data_nonans, cmap='viridis',
norm=LogNorm(vmin=2, vmax=50))
# ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Right Ascension", fontsize = 16)
ax.set_ylabel("Declination", fontsize = 16)
ax.grid(color = 'white', ls = 'dotted', lw = 2)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size = 16)
# Overlay set of Galactic Coordinate Axes
overlay = ax.get_coords_overlay('galactic')
overlay.grid(color='black', ls='dotted', lw=1)
overlay[0].set_axislabel('Galactic Longitude', fontsize=14)
overlay[1].set_axislabel('Galactic Latitude', fontsize=14)
###Output
_____no_output_____
###Markdown
Overlay HI 21 cm Contours on the IR 350 micron ImageTo visually compare the neutral gas and dust as traced by HI 21 cm emission and IR 350 micron emission, we can use contours and colorscale images produced using the [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/index.html) framework and the `.get_transform()` method. The [WCSAxes.get_transform()](http://docs.astropy.org/en/stable/api/astropy.visualization.wcsaxes.WCSAxes.html#astropy.visualization.wcsaxes.WCSAxes.get_transform) method returns a transformation from a specified frame to the pixel/data coordinates. It accepts a string specifying the frame or a WCS object.
###Code
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=herschel_wcs)
# Display the Herschel 350 micron image
im = ax.imshow(herschel_data_nonans, cmap='viridis',
norm=LogNorm(vmin=5, vmax=50), alpha=.8)
# ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Right Ascension", fontsize=16)
ax.set_ylabel("Declination", fontsize=16)
ax.grid(color = 'white', ls='dotted', lw=2)
# Extract x and y coordinate limits
x_lim = ax.get_xlim()
y_lim = ax.get_ylim()
# Add a colorbar
cbar = plt.colorbar(im, fraction=0.046, pad=-0.1)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size=16)
# Overlay a set of Galactic coordinate axes
overlay = ax.get_coords_overlay('galactic')
overlay.grid(color='black', ls='dotted', lw=1)
overlay[0].set_axislabel('Galactic Longitude', fontsize=14)
overlay[1].set_axislabel('Galactic Latitude', fontsize=14)
hi_transform = ax.get_transform(hi_column_density.wcs) # extract axes Transform information for the HI data
# Overplot column density contours
levels = (2e21, 3e21, 5e21, 7e21, 8e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap='Greys_r', alpha=0.8, levels=levels,
transform=hi_transform) # include the transform information with the keyword "transform"
# Overplot velocity image so we can also see the Gas velocities
im_hi = ax.imshow(moment_1.hdu.data, cmap='RdBu_r', vmin=0, vmax=200, alpha=0.5, transform=hi_transform)
# Add a second colorbar for the HI Velocity information
cbar_hi = plt.colorbar(im_hi, orientation='horizontal', fraction=0.046, pad=0.07)
cbar_hi.set_label('HI 'r'$21$cm Mean Velocity (km/s)', size=16)
# Apply original image x and y coordinate limits
ax.set_xlim(x_lim)
ax.set_ylim(y_lim)
###Output
_____no_output_____
###Markdown
Using reproject to match image resolutionsThe [reproject](https://reproject.readthedocs.io/en/stable/) package is a powerful tool allowing for image data to be transformed into a variety of projections and resolutions. Its most powerful use is to transform data from one map projection to another without losing any information and still properly conserving flux values within the data. It even has a method to perform a fast reprojection if you are not too concerned with the absolute accuracy of the data values. A simple use of the reproject package is to scale down (or up) the resolution of an image artificially. This could be a useful step if you are trying to get emission line ratios or directly compare the Intensity or Flux from one tracer to that of another tracer at the same physical point on the sky. From our previously made images, we can see that the IR Herschel Image has a higher spatial resolution than that of the HI data cube. We can look into this more by taking a better look at both header objects and using reproject to downscale the Herschel Image.
###Code
print('IR Resolution (dx,dy) = ', herschel_header['cdelt1'], herschel_header['cdelt2'])
print('HI Resolution (dx,dy) = ', hi_column_density.hdu.header['cdelt1'], hi_column_density.hdu.header['cdelt2'])
###Output
_____no_output_____
###Markdown
Note: Different ways of accessing the header are shown above corresponding to the different object types (coming from SpectralCube vs astropy.io.fits). As we can see, the IR data has over 10 times higher spatial resolution. In order to create a new projection of an image, all we need to specify is a new header containing WCS information to transform into. These can be created manually if you wanted to completely change something about the projection type (e.g., going from a Mercator map projection to a Tangential map projection). For us, since we want to match our resolutions, we can just "steal" the WCS object from the HI data. Specifically, we will be using the [reproject_interp()](https://reproject.readthedocs.io/en/stable/api/reproject.reproject_interp.html#reproject.reproject_interp) function. This takes two arguments: an HDU object that you want to reproject, and a header containing WCS information to reproject onto.
###Code
rescaled_herschel_data, _ = reproject_interp(herschel_imagehdu,
                                             # reproject the Herschel image to match the HI data
hi_column_density.hdu.header)
rescaled_herschel_imagehdu = fits.PrimaryHDU(data = rescaled_herschel_data,
# wrap up our reprojection as a new fits HDU object
header = hi_column_density.hdu.header)
###Output
_____no_output_____
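###Markdown
If strict flux conservation matters more than speed, the reproject package also provides `reproject_exact()`, which takes the same kind of arguments. A sketch for comparison (it can be noticeably slower than `reproject_interp()` on an image this size, and the result is not used further here):
###Code
from reproject import reproject_exact
# Flux-conserving reprojection onto the same HI header (slower than reproject_interp)
exact_herschel_data, exact_footprint = reproject_exact(herschel_imagehdu,
                                                       hi_column_density.hdu.header)
###Output
_____no_output_____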
###Markdown
`rescaled_herschel_imagehdu` will now behave just like the other FITS images we have been working with, but now with a degraded resolution matching the HI data. This includes having its native coordinates in Galactic rather than RA and Dec.
###Code
# Set Nans to zero
image_nan_locs = np.isnan(rescaled_herschel_imagehdu.data)
rescaled_herschel_data_nonans = rescaled_herschel_imagehdu.data
rescaled_herschel_data_nonans[image_nan_locs] = 0
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize = (18,12))
ax = fig.add_subplot(111,projection = WCS(rescaled_herschel_imagehdu))
# Display the reprojected Herschel image
im = ax.imshow(rescaled_herschel_data_nonans, cmap = 'viridis',
norm = LogNorm(vmin=5, vmax=50), alpha = .8)
#im = ax.imshow(rescaled_herschel_imagehdu.data, cmap = 'viridis',
# norm = LogNorm(), vmin = 5, vmax = 50, alpha = .8)
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Galactic Longitude", fontsize = 16)
ax.set_ylabel("Galactic Latitude", fontsize = 16)
ax.grid(color = 'white', ls = 'dotted', lw = 2)
# Extract x and y coordinate limits
x_lim = ax.get_xlim()
y_lim = ax.get_ylim()
# Add a colorbar
cbar = plt.colorbar(im, fraction=0.046, pad=-0.1)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size = 16)
# Overlay set of RA/Dec Axes
overlay = ax.get_coords_overlay('fk5')
overlay.grid(color='black', ls='dotted', lw = 1)
overlay[0].set_axislabel('Right Ascension', fontsize = 14)
overlay[1].set_axislabel('Declination', fontsize = 14)
hi_transform = ax.get_transform(hi_column_density.wcs) # extract axes Transform information for the HI data
# Overplot column density contours
levels = (2e21, 3e21, 5e21, 7e21, 8e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap = 'Greys_r', alpha = 0.8, levels = levels,
transform = hi_transform) # include the transform information with the keyword "transform"
# Overplot velocity image so we can also see the Gas velocities
im_hi = ax.imshow(moment_1.hdu.data, cmap = 'RdBu_r', vmin = 0, vmax = 200, alpha = 0.5, transform = hi_transform)
# Add a second colorbar for the HI Velocity information
cbar_hi = plt.colorbar(im_hi, orientation = 'horizontal', fraction=0.046, pad=0.07)
cbar_hi.set_label('HI 'r'$21$cm Mean Velocity (km/s)', size = 16)
# Apply original image x and y coordinate limits
ax.set_xlim(x_lim)
ax.set_ylim(y_lim)
###Output
_____no_output_____
###Markdown
Working with FITS-cubes Authors[Dhanesh Krishnarao (DK)](http://www.astronomy.dk), [Shravan Shetty](http://www.astro.wisc.edu/our-people/post-doctoral-students/shetty-shravan/), [Diego Gonzalez-Casanova](http://www.astro.wisc.edu/our-people/graduate-students/gonzalez-casanova-diego/), [Audra Hernandez](http://www.astro.wisc.edu/our-people/scientists/hernandez-audra/), Kris Stern, Kelle Cruz, Stephanie Douglas Learning Goals* Find and download data using `astroquery`* Read and plot slices across different dimensions of a data cube* Compare different data sets (2D and 3D) by overplotting contours* Transform coordinate projections and match data resolutions with `reproject`* Create intensity moment maps / velocity maps with `spectral_cube` KeywordsFITS, image manipulation, data cubes, radio astronomy, WCS, astroquery, reproject, spectral cube, matplotlib, contour plots, colorbar SummaryIn this tutorial we will visualize 2D and 3D data sets in Galactic and equatorial coordinates. The tutorial will walk you through a visual analysis of the Small Magellanic Cloud (SMC) using HI 21cm emission and a Herschel 350 micron map. We will learn how to read in data from a file, query and download matching data from Herschel using astroquery, and plot the resulting images in a multitude of ways. The primary libraries we'll be using are: [astroquery](http://www.astropy.org/astroquery/), [spectral_cube](https://spectral-cube.readthedocs.io/en/latest/), [reproject](https://reproject.readthedocs.io/en/stable/), and [matplotlib](https://matplotlib.org/). They can be installed using conda: ```conda install -c conda-forge astroquery```, ```conda install -c conda-forge spectral-cube```, and ```conda install -c conda-forge reproject```. Alternatively, if you don't use conda, you can use pip.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import astropy.units as u
from astropy.utils.data import download_file
from astropy.io import fits # We use fits to open the actual data file
from astropy.utils import data
data.conf.remote_timeout = 60
from spectral_cube import SpectralCube
from astroquery.esasky import ESASky
from astroquery.utils import TableList
from astropy.wcs import WCS
from reproject import reproject_interp
###Output
_____no_output_____
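###Markdown
If you want to confirm that the installs above worked, a quick optional check of the package versions can help when comparing against the documentation:
###Code
import astropy, astroquery, spectral_cube, reproject
for pkg in (astropy, astroquery, spectral_cube, reproject):
    print(pkg.__name__, pkg.__version__)  # Report each package version
###Output
_____no_output_____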
###Markdown
Download the HI DataWe'll be using HI 21 cm emission data from the [HI4Pi survey](http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1610.06175). We want to look at neutral gas emission from the Magellanic Clouds and learn about the kinematics of the system and column densities. Using the VizieR catalog, we've found a relevant data cube to use that covers this region of the sky. You can also download an allsky data cube, but this is a very large file, so picking out sub-sections can be useful!For us, the [relevant file is available via ftp from CDS Strasbourg](http://cdsarc.u-strasbg.fr/vizier/ftp/cats/J/A+A/594/A116/CUBES/GAL/TAN/TAN_C14.fits). We have a reduced version of it which will be a FITS data cube in Galactic coordinates using the tangential sky projection.Sure, we could download this file directly, but why do that when we can load it up via one line of code and have it ready to use in our cache? Download the HI Fits Cube
###Code
# Downloads the HI data in a fits file format
hi_datafile = download_file(
'http://data.astropy.org/tutorials/FITS-cubes/reduced_TAN_C14.fits',
cache=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Awesome, so now we have a copy of the data file (a FITS file). So how do we do anything with it?Luckily for us, the [spectral_cube](https://spectral-cube.readthedocs.io/en/latest/) package does a lot of the nitty-gritty work for us to manipulate this data and even quickly look through it. So let's open up our data file and read in the data as a SpectralCube!The variable `cube` holds the data as a SpectralCube object, and `hi_data` is the data cube from the FITS file without the special formatting from SpectralCube.
###Code
hi_data = fits.open(hi_datafile) # Open the FITS file for reading
cube = SpectralCube.read(hi_data) # Initiate a SpectralCube
hi_data.close() # Close the FITS file - we already read it in and don't need it anymore!
###Output
_____no_output_____
###Markdown
If you happen to already have the FITS file on your system, you can also skip the fits.open step and just directly read a FITS file with SpectralCube like this:`cube = SpectralCube.read('path_to_data_file/TAN_C14.fits') `So what does this SpectralCube object actually look like? Let's find out! The first check is to print out the cube.
###Code
print(cube)
###Output
_____no_output_____
###Markdown
Some things to pay attention to here:A data cube has three axes. In this case, there is Galactic Longitude (x), Galactic Latitude (y), and a spectral axis in terms of a LSR Velocity (z - listed as s with `spectral_cube`).The data hidden in the cube lives as an ndarray with shape (n_s, n_y, n_x) so that axis 0 corresponds with the Spectral Axis, axis 1 corresponds with the Galactic Latitude Axis, and axis 2 corresponds with the Galactic Longitude Axis. When we `print(cube)` we can see the shape, size, and units of all axes as well as the data stored in the cube. With this cube, the units of the data in the cube are temperatures (K). The spatial axes are in degrees and the Spectral Axis is in (meters / second).The cube also contains information about the coordinates corresponding to the data in the form of a WCS (World Coordinate System) object. SpectralCube is clever and keeps all the data masked until you really need it so that you can work with large sets of data. So let's see what our data actually looks like!SpectralCube has a `quicklook()` method which can give a handy sneak-peek preview of the data. It's useful when you just need to glance at a slice or spectrum without knowing any other information (say, to make sure the data isn't corrupted or is looking at the right region.) To do this, we index our cube along one axis (for a slice) or two axes (for a spectrum):
###Code
cube[300, :, :].quicklook() # Slice the cube along the spectral axis, and display a quick image
cube[:, 75, 75].quicklook() # Extract a single spectrum through the data cube
###Output
_____no_output_____
###Markdown
Try messing around with slicing the cube along different axes, or picking out different spectra Make a smaller cube, focusing on the Magellanic CloudsThe HI data cube we downloaded is bigger than we actually need it to be. Let's try zooming in on just the part we need and make a new `sub_cube`. The easiest way to do this is to cut out part of the cube with indices or coordinates.We can extract the world coordinates from the cube using the `.world()` method. Warning: using .world() will extract coordinates from every position you ask for. This can be a TON of data if you don't slice through the cube. One workaround is to slice along two axes and extract coordinates just along a single dimension. The output of `.world()` is an Astropy `Quantity` representing the world coordinates, which includes units. You can extract these Astropy `Quantity` objects by slicing the data.
###Code
_, b, _ = cube.world[0, :, 0] #extract latitude world coordinates from cube
_, _, l = cube.world[0, 0, :] #extract longitude world coordinates from cube
###Output
_____no_output_____
###Markdown
You can then extract a `sub_cube` in the spatial coordinates of the cube
###Code
# Define desired latitude and longitude range
lat_range = [-46, -40] * u.deg
lon_range = [306, 295] * u.deg
# Create a sub_cube cut to these coordinates
sub_cube = cube.subcube(xlo=lon_range[0], xhi=lon_range[1], ylo=lat_range[0], yhi=lat_range[1])
print(sub_cube)
###Output
_____no_output_____
###Markdown
Cut along the Spectral Axis:We don't really need data from such a large velocity range so let's just extract a little slab. We can do this in any units that we want using the `.spectral_slab()` method.
###Code
sub_cube_slab = sub_cube.spectral_slab(-300. *u.km / u.s, 300. *u.km / u.s)
print(sub_cube_slab)
###Output
_____no_output_____
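###Markdown
SpectralCube also lets you mask the data explicitly before computing anything from it, which can be handy for noisy cubes. A minimal sketch using an arbitrary 0.5 K brightness threshold purely for illustration:
###Code
# Keep only voxels brighter than 0.5 K (an arbitrary example threshold)
bright_slab = sub_cube_slab.with_mask(sub_cube_slab > 0.5 * u.K)
print(bright_slab)
###Output
_____no_output_____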
###Markdown
Moment MapsMoment maps are a useful analysis tool to study data cubes. In short, a moment is a weighted integral along an axis (typically the Spectral Axis) that can give information about the total Intensity (or column density), mean velocity, or velocity dispersion along lines of sight. SpectralCube makes this very simple with the `.moment()` method. We can convert to friendlier spectral units of km/s and these new 2D projections can be saved as new FITS files, complete with modified WCS information as well.
###Code
moment_0 = sub_cube_slab.with_spectral_unit(u.km/u.s).moment(order=0) # Zero-th moment
moment_1 = sub_cube_slab.with_spectral_unit(u.km/u.s).moment(order=1) # First moment
# Write the moments as a FITS image
# moment_0.write('hi_moment_0.fits')
# moment_1.write('hi_moment_1.fits')
print('Moment_0 has units of: ', moment_0.unit)
print('Moment_1 has units of: ', moment_1.unit)
# Convert Moment_0 to a Column Density assuming optically thin media
hi_column_density = moment_0 * 1.82 * 10**18 / (u.cm * u.cm) * u.s / u.K / u.km
###Output
_____no_output_____
###Markdown
Display the Moment MapsThe [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/) framework in Astropy allows us to display images with different coordinate axes and projections.As long as we have a WCS object associated with the data, we can transfer that projection to a matplotlib axis. SpectralCube makes it possible to access just the WCS object associated with a cube object.
###Code
print(moment_1.wcs) # Examine the WCS object associated with the moment map
###Output
_____no_output_____
###Markdown
As expected, the first moment image we created only has two axes (Galactic Longitude and Galactic Latitude). We can pass in this WCS object directly into a matplotlib axis instance.
###Code
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=moment_1.wcs)
# Display the moment map image
im = ax.imshow(moment_1.hdu.data, cmap='RdBu_r', vmin=0, vmax=200)
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Galactic Longitude (degrees)", fontsize=16)
ax.set_ylabel("Galactic Latitude (degrees)", fontsize=16)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07)
cbar.set_label('Velocity (km/s)', size=16)
# Overlay set of RA/Dec Axes
overlay = ax.get_coords_overlay('fk5')
overlay.grid(color='white', ls='dotted', lw=2)
overlay[0].set_axislabel('Right Ascension (J2000)', fontsize=16)
overlay[1].set_axislabel('Declination (J2000)', fontsize=16)
# Overplot column density contours
levels = (1e20, 5e20, 1e21, 3e21, 5e21, 7e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap='Greys_r', alpha=0.5, levels=levels)
###Output
_____no_output_____
###Markdown
As you can see, the WCSAxes framework is very powerful and similar to making any matplotlib style plot. Display a Longitude-Velocity SliceThe [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/) framework in Astropy also lets us slice the data across different dimensions. It is often useful to slice along a single latitude and display an image showing longitude and velocity information only (position-velocity or longitude-velocity diagram).This can be done by specifying the `slices` keyword and selecting the appropriate slice through the data. `slices` requires a 3-element tuple containing the index to be sliced along and where we want the two axes to be displayed. This should be specified in the same order as the WCS object (longitude, latitude, velocity) as opposed to the order of the numpy array holding the data (velocity, latitude, longitude). We then select the appropriate data by indexing along the numpy array.
###Code
lat_slice = 18 # Index of latitude dimension to slice along
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=sub_cube_slab.wcs, slices=('y', lat_slice, 'x'))
# Above, we have specified to plot the longitude along the y axis, pick only the lat_slice
# indicated, and plot the velocity along the x axis
# Display the slice
im = ax.imshow(sub_cube_slab[:, lat_slice, :].transpose().data) # Display the image slice
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("LSR Velocity (m/s)", fontsize=16)
ax.set_ylabel("Galactic Longitude (degrees)", fontsize=16)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07, orientation='horizontal')
cbar.set_label('Temperature (K)', size=16)
###Output
_____no_output_____
###Markdown
As we can see, the SMC seems to be only along positive velocities. Try: Create a new spectral slab isolating just the SMC and slice along a different dimension to create a latitude-velocity diagram Find and Download a Herschel ImageThis is great, but we want to compare the HI emission data with Herschel 350 micron emission to trace some dust. This can be done with [astroquery](http://www.astropy.org/astroquery/). We can query for the data by mission, take a quick look at the table of results, and download data after selecting a specific wavelength or filter. Since we are looking for Herschel data from an ESA mission, we will use the [astroquery.ESASky](http://astroquery.readthedocs.io/en/latest/esasky/esasky.html) class.Specifically, the `ESASKY.query_region_maps()` method allows us to search for a specific region of the sky either using an Astropy SkyCoord object or a string specifying an object name. In this case, we can just search for the SMC. A radius to search around the object can also be specified.
###Code
# Query for Herschel data in a 1 degree radius around the SMC
result = ESASky.query_region_maps('SMC', radius=1*u.deg, missions='Herschel')
print(result)
###Output
_____no_output_____
###Markdown
Here, the result is a TableList which contains 24 Herschel data products that can be downloaded. We can see what information is available in this TableList by examining the keys in the Herschel Table.
###Code
result['HERSCHEL'].keys()
###Output
_____no_output_____
###Markdown
We want to find a 350 micron image, so we need to look closer at the filters used for these observations.
###Code
result['HERSCHEL']['filter']
###Output
_____no_output_____
###Markdown
Luckily for us, there is an observation made with three filters: 250, 350, and 500 microns. This is the object we will want to download. One way to do this is by making a boolean mask to select out the Table entry corresponding with the desired filter. Then, the `ESASky.get_maps()` method will download our data provided a TableList argument. Note that the below command requires an internet connection to download five quite large files. It could take several minutes to complete.
###Code
filters = result['HERSCHEL']['filter'].astype(str) # Convert the list of filters from the query to a string
# Construct a boolean mask, searching for only the desired filters
mask = np.array(['250, 350, 500' == s for s in filters], dtype='bool')
# Re-construct a new TableList object containing only our desired query entry
target_obs = TableList({"HERSCHEL":result['HERSCHEL'][mask]}) # This will be passed into ESASky.get_maps()
IR_images = ESASky.get_maps(target_obs) # Download the images
IR_images['HERSCHEL'][0]['350'].info() # Display some information about the 350 micron image
###Output
_____no_output_____
###Markdown
Since we are just doing some qualitative analysis, we only need the image, but you can also access lots of other information from our downloaded object, such as errors. Let's go ahead and extract just the WCS information and image data from the 350 micron image.
###Code
herschel_header = IR_images['HERSCHEL'][0]['350']['image'].header
herschel_wcs = WCS(IR_images['HERSCHEL'][0]['350']['image']) # Extract WCS information
herschel_imagehdu = IR_images['HERSCHEL'][0]['350']['image'] # Extract Image data
print(herschel_wcs)
###Output
_____no_output_____
###Markdown
With this, we can display this image using matplotlib with [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/index.html) and the `LogNorm()` object so we can log scale our image.
###Code
# Set Nans to zero
himage_nan_locs = np.isnan(herschel_imagehdu.data)
herschel_data_nonans = herschel_imagehdu.data
herschel_data_nonans[himage_nan_locs] = 0
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=herschel_wcs)
# Display the Herschel 350 micron image
im = ax.imshow(herschel_data_nonans, cmap='viridis',
               norm=LogNorm(vmin=2, vmax=50))
# ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Right Ascension", fontsize = 16)
ax.set_ylabel("Declination", fontsize = 16)
ax.grid(color = 'white', ls = 'dotted', lw = 2)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size = 16)
# Overlay set of Galactic Coordinate Axes
overlay = ax.get_coords_overlay('galactic')
overlay.grid(color='black', ls='dotted', lw=1)
overlay[0].set_axislabel('Galactic Longitude', fontsize=14)
overlay[1].set_axislabel('Galactic Latitude', fontsize=14)
###Output
_____no_output_____
###Markdown
Overlay HI 21 cm Contours on the IR 350 micron ImageTo visually compare the neutral gas and dust as traced by HI 21 cm emission and IR 350 micron emission, we can use contours and colorscale images produced using the [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/index.html) framework and the `.get_transform()` method. The [WCSAxes.get_transform()](http://docs.astropy.org/en/stable/api/astropy.visualization.wcsaxes.WCSAxes.html#astropy.visualization.wcsaxes.WCSAxes.get_transform) method returns a transformation from a specified frame to the pixel/data coordinates. It accepts a string specifying the frame or a WCS object.
###Code
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=herschel_wcs)
# Display the Herschel 350 micron image
im = ax.imshow(herschel_data_nonans, cmap='viridis',
               norm=LogNorm(vmin=5, vmax=50), alpha=.8)
# ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Right Ascension", fontsize=16)
ax.set_ylabel("Declination", fontsize=16)
ax.grid(color = 'white', ls='dotted', lw=2)
# Extract x and y coordinate limits
x_lim = ax.get_xlim()
y_lim = ax.get_ylim()
# Add a colorbar
cbar = plt.colorbar(im, fraction=0.046, pad=-0.1)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size=16)
# Overlay a set of Galactic coordinate axes
overlay = ax.get_coords_overlay('galactic')
overlay.grid(color='black', ls='dotted', lw=1)
overlay[0].set_axislabel('Galactic Longitude', fontsize=14)
overlay[1].set_axislabel('Galactic Latitude', fontsize=14)
hi_transform = ax.get_transform(hi_column_density.wcs) # extract axes Transform information for the HI data
# Overplot column density contours
levels = (2e21, 3e21, 5e21, 7e21, 8e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap='Greys_r', alpha=0.8, levels=levels,
transform=hi_transform) # include the transform information with the keyword "transform"
# Overplot velocity image so we can also see the Gas velocities
im_hi = ax.imshow(moment_1.hdu.data, cmap='RdBu_r', vmin=0, vmax=200, alpha=0.5, transform=hi_transform)
# Add a second colorbar for the HI Velocity information
cbar_hi = plt.colorbar(im_hi, orientation='horizontal', fraction=0.046, pad=0.07)
cbar_hi.set_label('HI 'r'$21$cm Mean Velocity (km/s)', size=16)
# Apply original image x and y coordinate limits
ax.set_xlim(x_lim)
ax.set_ylim(y_lim)
###Output
_____no_output_____
###Markdown
Using reproject to match image resolutionsThe [reproject](https://reproject.readthedocs.io/en/stable/) package is a powerful tool allowing for image data to be transformed into a variety of projections and resolutions. Its most powerful use is to transform data from one map projection to another without losing any information and still properly conserving flux values within the data. It even has a method to perform a fast reprojection if you are not too concerned with the absolute accuracy of the data values. A simple use of the reproject package is to scale down (or up) the resolution of an image artificially. This could be a useful step if you are trying to get emission line ratios or directly compare the Intensity or Flux from one tracer to that of another tracer at the same physical point on the sky. From our previously made images, we can see that the IR Herschel Image has a higher spatial resolution than that of the HI data cube. We can look into this more by taking a better look at both header objects and using reproject to downscale the Herschel Image.
###Code
print('IR Resolution (dx,dy) = ', herschel_header['cdelt1'], herschel_header['cdelt2'])
print('HI Resolution (dx,dy) = ', hi_column_density.hdu.header['cdelt1'], hi_column_density.hdu.header['cdelt2'])
###Output
_____no_output_____
###Markdown
Note: Different ways of accessing the header are shown above corresponding to the different object types (coming from SpectralCube vs astropy.io.fits). As we can see, the IR data has over 10 times higher spatial resolution. In order to create a new projection of an image, all we need to specify is a new header containing WCS information to transform into. These can be created manually if you wanted to completely change something about the projection type (e.g., going from a Mercator map projection to a Tangential map projection). For us, since we want to match our resolutions, we can just "steal" the WCS object from the HI data. Specifically, we will be using the [reproject_interp()](https://reproject.readthedocs.io/en/stable/api/reproject.reproject_interp.html#reproject.reproject_interp) function. This takes two arguments: an HDU object that you want to reproject, and a header containing WCS information to reproject onto.
###Code
rescaled_herschel_data, _ = reproject_interp(herschel_imagehdu,
                                             # reproject the Herschel image to match the HI data
hi_column_density.hdu.header)
rescaled_herschel_imagehdu = fits.PrimaryHDU(data = rescaled_herschel_data,
# wrap up our reprojection as a new fits HDU object
header = hi_column_density.hdu.header)
###Output
_____no_output_____
###Markdown
`rescaled_herschel_imagehdu` will now behave just like the other FITS images we have been working with, but now with a degraded resolution matching the HI data. This includes having its native coordinates in Galactic rather than RA and Dec.
###Code
# Set Nans to zero
image_nan_locs = np.isnan(rescaled_herschel_imagehdu.data)
rescaled_herschel_data_nonans = rescaled_herschel_imagehdu.data
rescaled_herschel_data_nonans[image_nan_locs] = 0
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize = (18,12))
ax = fig.add_subplot(111,projection = WCS(rescaled_herschel_imagehdu))
# Display the reprojected Herschel image
im = ax.imshow(rescaled_herschel_data_nonans, cmap = 'viridis',
               norm = LogNorm(vmin=5, vmax=50), alpha = .8)
#im = ax.imshow(rescaled_herschel_imagehdu.data, cmap = 'viridis',
# norm = LogNorm(), vmin = 5, vmax = 50, alpha = .8)
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Galactic Longitude", fontsize = 16)
ax.set_ylabel("Galactic Latitude", fontsize = 16)
ax.grid(color = 'white', ls = 'dotted', lw = 2)
# Extract x and y coordinate limits
x_lim = ax.get_xlim()
y_lim = ax.get_ylim()
# Add a colorbar
cbar = plt.colorbar(im, fraction=0.046, pad=-0.1)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size = 16)
# Overlay set of RA/Dec Axes
overlay = ax.get_coords_overlay('fk5')
overlay.grid(color='black', ls='dotted', lw = 1)
overlay[0].set_axislabel('Right Ascension', fontsize = 14)
overlay[1].set_axislabel('Declination', fontsize = 14)
hi_transform = ax.get_transform(hi_column_density.wcs) # extract axes Transform information for the HI data
# Overplot column density contours
levels = (2e21, 3e21, 5e21, 7e21, 8e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap = 'Greys_r', alpha = 0.8, levels = levels,
transform = hi_transform) # include the transform information with the keyword "transform"
# Overplot velocity image so we can also see the Gas velocities
im_hi = ax.imshow(moment_1.hdu.data, cmap = 'RdBu_r', vmin = 0, vmax = 200, alpha = 0.5, transform = hi_transform)
# Add a second colorbar for the HI Velocity information
cbar_hi = plt.colorbar(im_hi, orientation = 'horizontal', fraction=0.046, pad=0.07)
cbar_hi.set_label('HI 'r'$21$cm Mean Velocity (km/s)', size = 16)
# Apply original image x and y coordinate limits
ax.set_xlim(x_lim)
ax.set_ylim(y_lim)
###Output
_____no_output_____
###Markdown
Working with FITS-cubes Authors[Dhanesh Krishnarao (DK)](http://www.astronomy.dk), [Shravan Shetty](http://www.astro.wisc.edu/our-people/post-doctoral-students/shetty-shravan/), [Diego Gonzalez-Casanova](http://www.astro.wisc.edu/our-people/graduate-students/gonzalez-casanova-diego/), [Audra Hernandez](http://www.astro.wisc.edu/our-people/scientists/hernandez-audra/), Kris Stern, Kelle Cruz, Stephanie Douglas Learning Goals* Find and download data using `astroquery`* Read and plot slices across different dimensions of a data cube* Compare different data sets (2D and 3D) by overplotting contours* Transform coordinate projections and match data resolutions with `reproject`* Create intensity moment maps / velocity maps with `spectral_cube` KeywordsFITS, image manipulation, data cubes, radio astronomy, WCS, astroquery, reproject, spectral cube, matplotlib, contour plots, colorbar SummaryIn this tutorial we will visualize 2D and 3D data sets in Galactic and equatorial coordinates. The tutorial will walk you through a visual analysis of the Small Magellanic Cloud (SMC) using HI 21cm emission and a Herschel 350 micron map. We will learn how to read in data from a file, query and download matching data from Herschel using astroquery, and plot the resulting images in a multitude of ways. The primary libraries we'll be using are: [astroquery](http://www.astropy.org/astroquery/), [spectral_cube](https://spectral-cube.readthedocs.io/en/latest/), [reproject](https://reproject.readthedocs.io/en/stable/), and [matplotlib](https://matplotlib.org/). They can be installed using conda: ```conda install -c conda-forge astroquery```, ```conda install -c conda-forge spectral-cube```, and ```conda install -c conda-forge reproject```. Alternatively, if you don't use conda, you can use pip.
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import astropy.units as u
from astropy.utils.data import download_file
from astropy.io import fits # We use fits to open the actual data file
from astropy.utils import data
data.conf.remote_timeout = 60
from spectral_cube import SpectralCube
from astroquery.esasky import ESASky
from astroquery.utils import TableList
from astropy.wcs import WCS
from reproject import reproject_interp
###Output
_____no_output_____
###Markdown
Download the HI DataWe'll be using HI 21 cm emission data from the [HI4Pi survey](http://adsabs.harvard.edu/cgi-bin/bib_query?arXiv:1610.06175). We want to look at neutral gas emission from the Magellanic Clouds and learn about the kinematics of the system and column densities. Using the VizieR catalog, we've found a relevant data cube to use that covers this region of the sky. You can also download an allsky data cube, but this is a very large file, so picking out sub-sections can be useful!For us, the [relevant file is available via ftp from CDS Strasbourg](http://cdsarc.u-strasbg.fr/vizier/ftp/cats/J/A+A/594/A116/CUBES/GAL/TAN/TAN_C14.fits). We have a reduced version of it which will be a FITS data cube in Galactic coordinates using the tangential sky projection.Sure, we could download this file directly, but why do that when we can load it up via one line of code and have it ready to use in our cache? Download the HI Fits Cube
###Code
# Downloads the HI data in a fits file format
hi_datafile = download_file(
'http://data.astropy.org/tutorials/FITS-cubes/reduced_TAN_C14.fits',
cache=True, show_progress=True)
###Output
_____no_output_____
###Markdown
Awesome, so now we have a copy of the data file (a FITS file). So how do we do anything with it?Luckily for us, the [spectral_cube](https://spectral-cube.readthedocs.io/en/latest/) package does a lot of the nitty-gritty work for us to manipulate this data and even quickly look through it. So let's open up our data file and read in the data as a SpectralCube!The variable `cube` holds the data as a SpectralCube object, and `hi_data` is the data cube from the FITS file without the special formatting from SpectralCube.
###Code
hi_data = fits.open(hi_datafile) # Open the FITS file for reading
cube = SpectralCube.read(hi_data) # Initiate a SpectralCube
hi_data.close() # Close the FITS file - we already read it in and don't need it anymore!
###Output
_____no_output_____
###Markdown
If you happen to already have the FITS file on your system, you can also skip the fits.open step and just directly read a FITS file with SpectralCube like this:`cube = SpectralCube.read('path_to_data_file/TAN_C14.fits') `So what does this SpectralCube object actually look like? Let's find out! The first check is to print out the cube.
###Code
print(cube)
###Output
_____no_output_____
###Markdown
Some things to pay attention to here:A data cube has three axes. In this case, there is Galactic Longitude (x), Galactic Latitude (y), and a spectral axis in terms of a LSR Velocity (z - listed as s with `spectral_cube`).The data hidden in the cube lives as an ndarray with shape (n_s, n_y, n_x) so that axis 0 corresponds with the Spectral Axis, axis 1 corresponds with the Galactic Latitude Axis, and axis 2 corresponds with the Galactic Longitude Axis. When we `print(cube)` we can see the shape, size, and units of all axes as well as the data stored in the cube. With this cube, the units of the data in the cube are temperatures (K). The spatial axes are in degrees and the Spectral Axis is in (meters / second).The cube also contains information about the coordinates corresponding to the data in the form of a WCS (World Coordinate System) object. SpectralCube is clever and keeps all the data masked until you really need it so that you can work with large sets of data. So let's see what our data actually looks like!SpectralCube has a `quicklook()` method which can give a handy sneak-peek preview of the data. It's useful when you just need to glance at a slice or spectrum without knowing any other information (say, to make sure the data isn't corrupted or is looking at the right region.) To do this, we index our cube along one axis (for a slice) or two axes (for a spectrum):
###Code
cube[300, :, :].quicklook() # Slice the cube along the spectral axis, and display a quick image
cube[:, 75, 75].quicklook() # Extract a single spectrum through the data cube
###Output
_____no_output_____
###Markdown
Try messing around with slicing the cube along different axes, or picking out different spectra Make a smaller cube, focusing on the Magellanic CloudsThe HI data cube we downloaded is bigger than we actually need it to be. Let's try zooming in on just the part we need and make a new `sub_cube`. The easiest way to do this is to cut out part of the cube with indices or coordinates.We can extract the world coordinates from the cube using the `.world()` method. Warning: using .world() will extract coordinates from every position you ask for. This can be a TON of data if you don't slice through the cube. One workaround is to slice along two axes and extract coordinates just along a single dimension. The output of `.world()` is an Astropy `Quantity` representing the world coordinates, which includes units. You can extract these Astropy `Quantity` objects by slicing the data.
###Code
_, b, _ = cube.world[0, :, 0] #extract latitude world coordinates from cube
_, _, l = cube.world[0, 0, :] #extract longitude world coordinates from cube
###Output
_____no_output_____
###Markdown
You can then extract a `sub_cube` in the spatial coordinates of the cube
###Code
# Define desired latitude and longitude range
lat_range = [-46, -40] * u.deg
lon_range = [306, 295] * u.deg
# Create a sub_cube cut to these coordinates
sub_cube = cube.subcube(xlo=lon_range[0], xhi=lon_range[1], ylo=lat_range[0], yhi=lat_range[1])
print(sub_cube)
###Output
_____no_output_____
###Markdown
Cut along the Spectral Axis:We don't really need data from such a large velocity range so let's just extract a little slab. We can do this in any units that we want using the `.spectral_slab()` method.
###Code
sub_cube_slab = sub_cube.spectral_slab(-300. *u.km / u.s, 300. *u.km / u.s)
print(sub_cube_slab)
###Output
_____no_output_____
###Markdown
Moment MapsMoment maps are a useful analysis tool to study data cubes. In short, a moment is a weighted integral along an axis (typically the Spectral Axis) that can give information about the total Intensity (or column density), mean velocity, or velocity dispersion along lines of sight. SpectralCube makes this very simple with the `.moment()` method. We can convert to friendlier spectral units of km/s and these new 2D projections can be saved as new FITS files, complete with modified WCS information as well.
###Code
moment_0 = sub_cube_slab.with_spectral_unit(u.km/u.s).moment(order=0) # Zero-th moment
moment_1 = sub_cube_slab.with_spectral_unit(u.km/u.s).moment(order=1) # First moment
# Write the moments as a FITS image
# moment_0.write('hi_moment_0.fits')
# moment_1.write('hi_moment_1.fits')
print('Moment_0 has units of: ', moment_0.unit)
print('Moment_1 has units of: ', moment_1.unit)
# Convert Moment_0 to a Column Density assuming optically thin media
hi_column_density = moment_0 * 1.82 * 10**18 / (u.cm * u.cm) * u.s / u.K / u.km
###Output
_____no_output_____
###Markdown
Display the Moment MapsThe [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/) framework in Astropy allows us to display images with different coordinate axes and projections.As long as we have a WCS object associated with the data, we can transfer that projection to a matplotlib axis. SpectralCube makes it possible to access just the WCS object associated with a cube object.
###Code
print(moment_1.wcs) # Examine the WCS object associated with the moment map
###Output
_____no_output_____
###Markdown
As expected, the first moment image we created only has two axes (Galactic Longitude and Galactic Latitude). We can pass in this WCS object directly into a matplotlib axis instance.
###Code
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=moment_1.wcs)
# Display the moment map image
im = ax.imshow(moment_1.hdu.data, cmap='RdBu_r', vmin=0, vmax=200)
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Galactic Longitude (degrees)", fontsize=16)
ax.set_ylabel("Galactic Latitude (degrees)", fontsize=16)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07)
cbar.set_label('Velocity (km/s)', size=16)
# Overlay set of RA/Dec Axes
overlay = ax.get_coords_overlay('fk5')
overlay.grid(color='white', ls='dotted', lw=2)
overlay[0].set_axislabel('Right Ascension (J2000)', fontsize=16)
overlay[1].set_axislabel('Declination (J2000)', fontsize=16)
# Overplot column density contours
levels = (1e20, 5e20, 1e21, 3e21, 5e21, 7e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap='Greys_r', alpha=0.5, levels=levels)
###Output
_____no_output_____
###Markdown
As you can see, the WCSAxes framework is very powerful and similar to making any matplotlib style plot. Display a Longitude-Velocity SliceThe [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/) framework in Astropy also lets us slice the data across different dimensions. It is often useful to slice along a single latitude and display an image showing longitude and velocity information only (position-velocity or longitude-velocity diagram).This can be done by specifying the `slices` keyword and selecting the appropriate slice through the data. `slices` requires a 3-element tuple containing the index to be sliced along and where we want the two axes to be displayed. This should be specified in the same order as the WCS object (longitude, latitude, velocity) as opposed to the order of the numpy array holding the data (velocity, latitude, longitude). We then select the appropriate data by indexing along the numpy array.
###Code
lat_slice = 18 # Index of latitude dimension to slice along
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=sub_cube_slab.wcs, slices=('y', lat_slice, 'x'))
# Above, we have specified to plot the longitude along the y axis, pick only the lat_slice
# indicated, and plot the velocity along the x axis
# Display the slice
im = ax.imshow(sub_cube_slab[:, lat_slice, :].transpose().data) # Display the image slice
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("LSR Velocity (m/s)", fontsize=16)
ax.set_ylabel("Galactic Longitude (degrees)", fontsize=16)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07, orientation='horizontal')
cbar.set_label('Temperature (K)', size=16)
###Output
_____no_output_____
###Markdown
As we can see, the SMC seems to be only along positive velocities. Try: Create a new spectral slab isolating just the SMC and slice along a different dimension to create a latitude-velocity diagram Find and Download a Herschel ImageThis is great, but we want to compare the HI emission data with Herschel 350 micron emission to trace some dust. This can be done with [astroquery](http://www.astropy.org/astroquery/). We can query for the data by mission, take a quick look at the table of results, and download data after selecting a specific wavelength or filter. Since we are looking for Herschel data from an ESA mission, we will use the [astroquery.ESASky](http://astroquery.readthedocs.io/en/latest/esasky/esasky.html) class.Specifically, the `ESASKY.query_region_maps()` method allows us to search for a specific region of the sky either using an Astropy SkyCoord object or a string specifying an object name. In this case, we can just search for the SMC. A radius to search around the object can also be specified.
###Code
# Query for Herschel data in a 1 degree radius around the SMC
result = ESASky.query_region_maps('SMC', radius=1*u.deg, missions='Herschel')
print(result)
###Output
_____no_output_____
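###Markdown
If you are unsure which missions ESASky can serve map data for in the first place, the class can list them; a one-line sketch, assuming the astroquery version in use exposes `list_maps()`:
###Code
print(ESASky.list_maps())  # Missions with map data available through ESASky
###Output
_____no_output_____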
###Markdown
Here, the result is a TableList which contains 24 Herschel data products that can be downloaded. We can see what information is available in this TableList by examining the keys in the Herschel Table.
###Code
result['HERSCHEL'].keys()
###Output
_____no_output_____
###Markdown
We want to find a 350 micron image, so we need to look closer at the filters used for these observations.
###Code
result['HERSCHEL']['filter']
###Output
_____no_output_____
###Markdown
Luckily for us, there is an observation made with three filters: 250, 350, and 500 microns. This is the object we will want to download. One way to do this is by making a boolean mask to select out the Table entry corresponding with the desired filter. Then, the `ESASky.get_maps()` method will download our data provided a TableList argument. Note that the below command requires an internet connection to download five quite large files. It could take several minutes to complete.
###Code
filters = result['HERSCHEL']['filter'].astype(str) # Convert the list of filters from the query to a string
# Construct a boolean mask, searching for only the desired filters
mask = np.array(['250, 350, 500' == s for s in filters], dtype='bool')
# Re-construct a new TableList object containing only our desired query entry
target_obs = TableList({"HERSCHEL":result['HERSCHEL'][mask]}) # This will be passed into ESASky.get_maps()
IR_images = ESASky.get_maps(target_obs) # Download the images
IR_images['HERSCHEL'][0]['350'].info() # Display some information about the 350 micron image
###Output
_____no_output_____
###Markdown
Since we are just doing some qualitative analysis, we only need the image, but you can also access lots of other information from our downloaded object, such as errors. Let's go ahead and extract just the WCS information and image data from the 350 micron image.
###Code
herschel_header = IR_images['HERSCHEL'][0]['350']['image'].header
herschel_wcs = WCS(IR_images['HERSCHEL'][0]['350']['image']) # Extract WCS information
herschel_imagehdu = IR_images['HERSCHEL'][0]['350']['image'] # Extract Image data
print(herschel_wcs)
###Output
_____no_output_____
###Markdown
With this, we can display this image using matplotlib with [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/index.html) and the `LogNorm()` object so we can log scale our image.
###Code
# Set Nans to zero
himage_nan_locs = np.isnan(herschel_imagehdu.data)
herschel_data_nonans = herschel_imagehdu.data
herschel_data_nonans[himage_nan_locs] = 0
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=herschel_wcs)
# Display the Herschel 350 micron image
im = ax.imshow(herschel_data_nonans, cmap='viridis',
norm=LogNorm(vmin=2, vmax=50))
# ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Right Ascension", fontsize = 16)
ax.set_ylabel("Declination", fontsize = 16)
ax.grid(color = 'white', ls = 'dotted', lw = 2)
# Add a colorbar
cbar = plt.colorbar(im, pad=.07)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size = 16)
# Overlay set of Galactic Coordinate Axes
overlay = ax.get_coords_overlay('galactic')
overlay.grid(color='black', ls='dotted', lw=1)
overlay[0].set_axislabel('Galactic Longitude', fontsize=14)
overlay[1].set_axislabel('Galactic Latitude', fontsize=14)
###Output
_____no_output_____
###Markdown
Overlay HI 21 cm Contours on the IR 350 micron ImageTo visually compare the neutral gas and dust as traced by HI 21 cm emission and IR 350 micron emission, we can use contours and colorscale images produced using the [WCSAxes](http://docs.astropy.org/en/stable/visualization/wcsaxes/index.html) framework and the `.get_transform()` method. The [WCSAxes.get_transform()](http://docs.astropy.org/en/stable/api/astropy.visualization.wcsaxes.WCSAxes.html#astropy.visualization.wcsaxes.WCSAxes.get_transform) method returns a transformation from a specified frame to the pixel/data coordinates. It accepts a string specifying the frame or a WCS object.
###Code
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize=(18, 12))
ax = fig.add_subplot(111, projection=herschel_wcs)
# Display the moment map image
im = ax.imshow(herschel_data_nonans, cmap='viridis',
norm=LogNorm(vmin=5, vmax=50), alpha=.8)
# ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Right Ascension", fontsize=16)
ax.set_ylabel("Declination", fontsize=16)
ax.grid(color = 'white', ls='dotted', lw=2)
# Extract x and y coordinate limits
x_lim = ax.get_xlim()
y_lim = ax.get_ylim()
# Add a colorbar
cbar = plt.colorbar(im, fraction=0.046, pad=-0.1)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size=16)
# Overlay set of RA/Dec Axes
overlay = ax.get_coords_overlay('galactic')
overlay.grid(color='black', ls='dotted', lw=1)
overlay[0].set_axislabel('Galactic Longitude', fontsize=14)
overlay[1].set_axislabel('Galactic Latitude', fontsize=14)
hi_transform = ax.get_transform(hi_column_density.wcs) # extract axes Transform information for the HI data
# Overplot column density contours
levels = (2e21, 3e21, 5e21, 7e21, 8e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap='Greys_r', alpha=0.8, levels=levels,
transform=hi_transform) # include the transform information with the keyword "transform"
# Overplot velocity image so we can also see the Gas velocities
im_hi = ax.imshow(moment_1.hdu.data, cmap='RdBu_r', vmin=0, vmax=200, alpha=0.5, transform=hi_transform)
# Add a second colorbar for the HI Velocity information
cbar_hi = plt.colorbar(im_hi, orientation='horizontal', fraction=0.046, pad=0.07)
cbar_hi.set_label('HI 'r'$21$cm Mean Velocity (km/s)', size=16)
# Apply original image x and y coordinate limits
ax.set_xlim(x_lim)
ax.set_ylim(y_lim)
###Output
_____no_output_____
###Markdown
Using reproject to match image resolutionsThe [reproject](https://reproject.readthedocs.io/en/stable/) package is a powerful tool allowing for image data to be transformed into a variety of projections and resolutions. Its most powerful use is in fact to transform data from one map projection to another without losing any information and still properly conserving flux values within the data. It even has a method to perform a fast reprojection if you are not too concerned with the absolute accuracy of the data values. A simple use of the reproject package is to scale down (or up) the resolution of an image artificially. This could be a useful step if you are trying to get emission line ratios or directly compare the intensity or flux from one tracer to that of another tracer at the same physical point on the sky. From our previously made images, we can see that the IR Herschel image has a higher spatial resolution than that of the HI data cube. We can look into this more by taking a better look at both header objects and using reproject to downscale the Herschel image.
###Code
print('IR Resolution (dx,dy) = ', herschel_header['cdelt1'], herschel_header['cdelt2'])
print('HI Resolution (dx,dy) = ', hi_column_density.hdu.header['cdelt1'], hi_column_density.hdu.header['cdelt1'])
###Output
_____no_output_____
###Markdown
Note: Different ways of accessing the header are shown above, corresponding to the different object types (coming from SpectralCube vs astropy.io.fits). As we can see, the IR data has over 10 times higher spatial resolution. In order to create a new projection of an image, all we need to specify is a new header containing WCS information to transform into. These can be created manually if you want to completely change something about the projection type (e.g. going from a Mercator map projection to a Tangential map projection). For us, since we want to match our resolutions, we can just "steal" the WCS object from the HI data. Specifically, we will be using the [reproject_interp()](https://reproject.readthedocs.io/en/stable/api/reproject.reproject_interp.html#reproject.reproject_interp) function. This takes two arguments: an HDU object that you want to reproject, and a header containing WCS information to reproject onto.
###Code
rescaled_herschel_data, _ = reproject_interp(herschel_imagehdu,
# reproject the Herschel image to match the HI data
hi_column_density.hdu.header)
rescaled_herschel_imagehdu = fits.PrimaryHDU(data = rescaled_herschel_data,
# wrap up our reprojection as a new fits HDU object
header = hi_column_density.hdu.header)
###Output
_____no_output_____
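###Markdown
The reprojected Herschel data now lives on the same pixel grid as the HI column density map, so a quick pixel-by-pixel comparison is possible. The cell below is only an illustrative sketch (no unit conversion or calibration is applied): it forms a crude 350 micron-to-N(HI) ratio map.
###Code
# Illustrative only: ratio of reprojected 350 micron brightness to HI column density.
with np.errstate(divide='ignore', invalid='ignore'):
    ratio_map = rescaled_herschel_data / hi_column_density.hdu.data
print('ratio range:', np.nanmin(ratio_map), 'to', np.nanmax(ratio_map))
###Output
_____no_output_____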
###Markdown
`rescaled_herschel_imagehdu` will now behave just like the other FITS images we have been working with, but now with a degraded resolution matching the HI data. This includes having its native coordinates in Galactic rather than RA and Dec.
###Code
# Set Nans to zero
image_nan_locs = np.isnan(rescaled_herschel_imagehdu.data)
rescaled_herschel_data_nonans = rescaled_herschel_imagehdu.data
rescaled_herschel_data_nonans[image_nan_locs] = 0
# Initiate a figure and axis object with WCS projection information
fig = plt.figure(figsize = (18,12))
ax = fig.add_subplot(111,projection = WCS(rescaled_herschel_imagehdu))
# Display the moment map image
im = ax.imshow(rescaled_herschel_data_nonans, cmap = 'viridis',
norm = LogNorm(vmin=5, vmax=50), alpha = .8)
#im = ax.imshow(rescaled_herschel_imagehdu.data, cmap = 'viridis',
# norm = LogNorm(), vmin = 5, vmax = 50, alpha = .8)
ax.invert_yaxis() # Flips the Y axis
# Add axes labels
ax.set_xlabel("Galactic Longitude", fontsize = 16)
ax.set_ylabel("Galactic Latitude", fontsize = 16)
ax.grid(color = 'white', ls = 'dotted', lw = 2)
# Extract x and y coordinate limits
x_lim = ax.get_xlim()
y_lim = ax.get_ylim()
# Add a colorbar
cbar = plt.colorbar(im, fraction=0.046, pad=-0.1)
cbar.set_label(''.join(['Herschel 350'r'$\mu$m ','(', herschel_header['BUNIT'], ')']), size = 16)
# Overlay set of RA/Dec Axes
overlay = ax.get_coords_overlay('fk5')
overlay.grid(color='black', ls='dotted', lw = 1)
overlay[0].set_axislabel('Right Ascension', fontsize = 14)
overlay[1].set_axislabel('Declination', fontsize = 14)
hi_transform = ax.get_transform(hi_column_density.wcs) # extract axes Transform information for the HI data
# Overplot column density contours
levels = (2e21, 3e21, 5e21, 7e21, 8e21, 1e22) # Define contour levels to use
ax.contour(hi_column_density.hdu.data, cmap = 'Greys_r', alpha = 0.8, levels = levels,
transform = hi_transform) # include the transform information with the keyword "transform"
# Overplot velocity image so we can also see the Gas velocities
im_hi = ax.imshow(moment_1.hdu.data, cmap = 'RdBu_r', vmin = 0, vmax = 200, alpha = 0.5, transform = hi_transform)
# Add a second colorbar for the HI Velocity information
cbar_hi = plt.colorbar(im_hi, orientation = 'horizontal', fraction=0.046, pad=0.07)
cbar_hi.set_label('HI 'r'$21$cm Mean Velocity (km/s)', size = 16)
# Apply original image x and y coordinate limits
ax.set_xlim(x_lim)
ax.set_ylim(y_lim)
###Output
_____no_output_____ |
Replacing sigmoid by piecewise linear function.ipynb | ###Markdown
Cross Entropy loss
###Code
def compute_loss(Y, Y_hat):
m = Y.shape[1]
L = -(1./m) * ( np.sum( np.multiply(np.log(Y_hat),Y) ) + np.sum( np.multiply(np.log(1-Y_hat),(1-Y)) ) )
return L
X = X_train
Y = y_train
n_x = X.shape[0]
n_h = 2*n_x
learning_rate = 0.2
def neuralnet(activation,derivative,epochs):
np.random.seed(101)
W1 = np.random.randn(n_h, n_x)*np.sqrt(2/n_x)
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(1, n_h)*np.sqrt(2/n_h)
b2 = np.zeros((1, 1))
act=activation
der=derivative
for i in range(epochs):
Z1 = np.matmul(W1, X) + b1
A1 = act(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = sigmoid(Z2)
cost = compute_loss(Y, A2)
dZ2 = A2-Y
dW2 = (1./m) * np.matmul(dZ2, A1.T)
db2 = (1./m) * np.sum(dZ2, axis=1, keepdims=True)
dA1 = np.matmul(W2.T, dZ2)
dZ1 = dA1* der(Z1)
dW1 = (1./m) * np.matmul(dZ1, X.T)
db1 = (1./m) * np.sum(dZ1, axis=1, keepdims=True)
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
if i % 1000 == 0:
print("Epoch", i, "cost: ", cost)
print("Final cost:", cost)
Z1 = np.matmul(W1, X_train) + b1
A1 = act(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = sigmoid(Z2)
predictions = (A2>.5)[0,:]
labels = (y_train == 1)[0,:]
print(confusion_matrix(predictions, labels))
print(classification_report(predictions, labels))
print('Accuracy =' , accuracy_score(labels, predictions))
def sigmoid(z):
s = 1 / (1 + np.exp(-z))
return s
def der_sigmoid(z):
s= sigmoid(z) * (1 - sigmoid(z))
return s
%%time
neuralnet(sigmoid,der_sigmoid,20000)
###Output
Epoch 0 cost: 0.8868684619934128
Epoch 1000 cost: 0.4551600128917266
Epoch 2000 cost: 0.4431393976503132
Epoch 3000 cost: 0.43379838536746596
Epoch 4000 cost: 0.4230327577937586
Epoch 5000 cost: 0.4126611329780337
Epoch 6000 cost: 0.40258432490284823
Epoch 7000 cost: 0.39402331804922147
Epoch 8000 cost: 0.3868843960357709
Epoch 9000 cost: 0.3807793727494434
Epoch 10000 cost: 0.3751514105273869
Epoch 11000 cost: 0.3695615070263004
Epoch 12000 cost: 0.3631341400157991
Epoch 13000 cost: 0.355139142501943
Epoch 14000 cost: 0.34733460144953404
Epoch 15000 cost: 0.3393775976685996
Epoch 16000 cost: 0.3310747968737648
Epoch 17000 cost: 0.32299151124682945
Epoch 18000 cost: 0.3164564818463449
Epoch 19000 cost: 0.3108649551218648
Final cost: 0.3057873769621363
[[456 61]
[ 44 207]]
precision recall f1-score support
False 0.91 0.88 0.90 517
True 0.77 0.82 0.80 251
avg / total 0.87 0.86 0.86 768
Accuracy = 0.86328125
CPU times: user 10.7 s, sys: 59.8 ms, total: 10.7 s
Wall time: 5.47 s
###Markdown
Piecewise linear function (k=4)
###Code
from numpy import vectorize
def piecewise_linear_4(x):
if x<=-2:
y=0
elif ((x>-2)&(x<0)):
y=(x/4)-2
elif ((x>0) & (x<2)):
y=(x/4)+1/2
elif x>=2:
y=1
return y
pwl_4 = vectorize(piecewise_linear_4)
#derivative definition
def piecewise_linear_4_der(x):
if x<=-2:
y=0
elif ((x>-2)&(x<0)):
y=1/4
elif ((x>0) & (x<2)):
y=1/4
elif x>=2:
y=0
return y
pwl_4_der= vectorize(piecewise_linear_4_der)
%%time
neuralnet(pwl_4,pwl_4_der,20000)
###Output
Epoch 0 cost: 1.0596049100656253
Epoch 1000 cost: 0.48093041738419684
Epoch 2000 cost: 0.47360109819481067
Epoch 3000 cost: 0.4750396982184158
Epoch 4000 cost: 0.47869013062753557
Epoch 5000 cost: 0.47808341813847643
Epoch 6000 cost: 0.47947331028856877
Epoch 7000 cost: 0.47858176370035865
Epoch 8000 cost: 0.48037103662268466
Epoch 9000 cost: 0.4801357434682837
Epoch 10000 cost: 0.47821915780084756
Epoch 11000 cost: 0.4753717204268402
Epoch 12000 cost: 0.47508101683690285
Epoch 13000 cost: 0.4742212189970463
Epoch 14000 cost: 0.4741272734710815
Epoch 15000 cost: 0.4769135703477567
Epoch 16000 cost: 0.4759321649501502
Epoch 17000 cost: 0.4731783132726015
Epoch 18000 cost: 0.47471363757723595
Epoch 19000 cost: 0.47982315624374855
Final cost: 0.4795374635658467
[[437 103]
[ 63 165]]
precision recall f1-score support
False 0.87 0.81 0.84 540
True 0.62 0.72 0.67 228
avg / total 0.80 0.78 0.79 768
Accuracy = 0.7838541666666666
CPU times: user 4min 18s, sys: 1.23 s, total: 4min 20s
Wall time: 2min 12s
###Markdown
Piecewise linear function (k=6)
###Code
def piecewise_linear_6(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-2)):
y=0.05*x+0.2
elif ((x>-2) & (x<=0)):
y=0.20*x+0.5
elif ((x>0) & (x<=2)):
y=0.20*x+0.5
elif ((x>2) & (x<=4)):
y=0.05*x+0.8
elif x>4:
y=1
return y
pwl_6 = vectorize(piecewise_linear_6)
def piecewise_linear_6_der(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-2)):
y=0.05
elif ((x>-2) & (x<=0)):
y=0.20
elif ((x>0) & (x<=2)):
y=0.20
elif ((x>2) & (x<=4)):
y=0.05
elif x>4:
y=0
return y
pwl_6_der= vectorize(piecewise_linear_6_der)
%%time
neuralnet(pwl_6,pwl_6_der,20000)
###Output
Epoch 0 cost: 0.8845800900977527
Epoch 1000 cost: 0.45668748796353487
Epoch 2000 cost: 0.4472940349761423
Epoch 3000 cost: 0.4404678840499182
Epoch 4000 cost: 0.43131146801038267
Epoch 5000 cost: 0.4202072601093601
Epoch 6000 cost: 0.4127197498529249
Epoch 7000 cost: 0.4064536825779002
Epoch 8000 cost: 0.40076633994874694
Epoch 9000 cost: 0.39675079577064404
Epoch 10000 cost: 0.39378217360506074
Epoch 11000 cost: 0.39140454235925937
Epoch 12000 cost: 0.3896018433524664
Epoch 13000 cost: 0.3882110799127636
Epoch 14000 cost: 0.38652334504557695
Epoch 15000 cost: 0.38507816425153757
Epoch 16000 cost: 0.3832563283439386
Epoch 17000 cost: 0.38097974817047975
Epoch 18000 cost: 0.37817352736390186
Epoch 19000 cost: 0.3758050315160476
Final cost: 0.37420467615110076
[[445 80]
[ 55 188]]
precision recall f1-score support
False 0.89 0.85 0.87 525
True 0.70 0.77 0.74 243
avg / total 0.83 0.82 0.83 768
Accuracy = 0.82421875
CPU times: user 5min 29s, sys: 2.64 s, total: 5min 32s
Wall time: 3min 1s
###Markdown
Piecewise linear function (k=8)
###Code
def piecewise_linear_8(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-2.6)):
y=0.05*x+0.2
elif ((x>-2.6) & (x<=-1.3)):
y=0.1*x+0.35
elif ((x>-1.3) & (x<=0)):
y=0.22*x+0.5
elif ((x>0) & (x<=1.3)):
y=0.22*x+0.5
elif ((x>1.3) & (x<=2.6)):
y=0.1*x+0.7
elif ((x>2.6) & (x<=4)):
y=0.06*x+0.7
elif x>4:
y=1
return y
pwl_8 = vectorize(piecewise_linear_8)
def piecewise_linear_8_der(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-2.6)):
y=0.05
elif ((x>-2.6) & (x<=-1.3)):
y=0.1
elif ((x>-1.3) & (x<=0)):
y=0.22
elif ((x>0) & (x<=1.3)):
y=0.22
elif ((x>1.3) & (x<=2.6)):
y=0.1
elif ((x>2.6) & (x<=4)):
y=0.06
elif x>4:
y=0
return y
pwl_8_der= vectorize(piecewise_linear_8_der)
%%time
neuralnet(pwl_8,pwl_8_der,20000)
###Output
Epoch 0 cost: 0.885528880531426
Epoch 1000 cost: 0.4577393012712097
Epoch 2000 cost: 0.44590035553928165
Epoch 3000 cost: 0.43600020959032554
Epoch 4000 cost: 0.4272419684688217
Epoch 5000 cost: 0.41861171536510666
Epoch 6000 cost: 0.41179963882483006
Epoch 7000 cost: 0.4056843564232922
Epoch 8000 cost: 0.3995863378263771
Epoch 9000 cost: 0.3937643426443278
Epoch 10000 cost: 0.3877496132620789
Epoch 11000 cost: 0.3849123071664967
Epoch 12000 cost: 0.3832507410670068
Epoch 13000 cost: 0.3798992077859069
Epoch 14000 cost: 0.37728004575979157
Epoch 15000 cost: 0.37390513436699835
Epoch 16000 cost: 0.3675005252092878
Epoch 17000 cost: 0.3636962789294785
Epoch 18000 cost: 0.3602305731935407
Epoch 19000 cost: 0.35802023689807744
Final cost: 0.35687949558350684
[[448 70]
[ 52 198]]
precision recall f1-score support
False 0.90 0.86 0.88 518
True 0.74 0.79 0.76 250
avg / total 0.84 0.84 0.84 768
Accuracy = 0.8411458333333334
CPU times: user 5min 32s, sys: 1.62 s, total: 5min 33s
Wall time: 2min 50s
###Markdown
Piecewise linear function (k=10)
###Code
def piecewise_linear_10(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-3)):
y= 0.0470*x + 0.18
elif ((x>-3) & (x<=-2)):
y= 0.0720*x + 0.26
elif ((x>-2) & (x<=-1)):
y= 0.1490*x + 0.41
elif ((x>-1) & (x<=0)):
y=0.2320*x + 0.5
elif ((x>0) & (x<=1)):
y= 0.2300*x + 0.5
elif ((x>1) & (x<=2)):
y= 0.1500*x + 0.58
elif ((x>2) & (x<=3)):
y=0.0700*x + 0.74
elif ((x>3) & (x<=4)):
y=0.050*x + 0.8
elif x>4:
y=1
return y
pwl_10 = vectorize(piecewise_linear_10)
def piecewise_linear_10_der(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-3)):
y= 0.0470
elif ((x>-3) & (x<=-2)):
y= 0.0720
elif ((x>-2) & (x<=-1)):
y= 0.1490
elif ((x>-1) & (x<=0)):
y=0.2320
elif ((x>0) & (x<=1)):
y= 0.2300
elif ((x>1) & (x<=2)):
y= 0.1500
elif ((x>2) & (x<=3)):
y=0.0700
elif ((x>3) & (x<=4)):
y=0.050
elif x>4:
y=0
return y
pwl_10_der= vectorize(piecewise_linear_10_der)
%%time
neuralnet(pwl_10,pwl_10_der,20000)
###Output
Epoch 0 cost: 0.8858123004631412
Epoch 1000 cost: 0.45605690205512406
Epoch 2000 cost: 0.4445590631573435
Epoch 3000 cost: 0.4337201536802002
Epoch 4000 cost: 0.4230871808324803
Epoch 5000 cost: 0.41587594796870225
Epoch 6000 cost: 0.40988587814111593
Epoch 7000 cost: 0.40528206115550686
Epoch 8000 cost: 0.40126895059329715
Epoch 9000 cost: 0.39715026310669505
Epoch 10000 cost: 0.39294127883835844
Epoch 11000 cost: 0.38943064738912875
Epoch 12000 cost: 0.38538741192259307
Epoch 13000 cost: 0.3819462800740605
Epoch 14000 cost: 0.37830605505415793
Epoch 15000 cost: 0.37464583574960786
Epoch 16000 cost: 0.36998047484500085
Epoch 17000 cost: 0.3636173443827898
Epoch 18000 cost: 0.35770375716542924
Epoch 19000 cost: 0.3516327188058102
Final cost: 0.34580556002841367
[[446 74]
[ 54 194]]
precision recall f1-score support
False 0.89 0.86 0.87 520
True 0.72 0.78 0.75 248
avg / total 0.84 0.83 0.83 768
Accuracy = 0.8333333333333334
CPU times: user 7min 19s, sys: 3.2 s, total: 7min 22s
Wall time: 3min 52s
###Markdown
Piecewise linear function (k=12)
###Code
def piecewise_linear_12(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-3.2)):
y= 0.0500*x + 0.2
elif ((x>-3.2) & (x<=-2.4)):
y=0.0500*x + 0.2
elif ((x>-2.4) & (x<=-1.6)):
y=0.1*x + 0.32
elif ((x>-1.6) & (x<=-0.8)):
y=0.175*x + 0.44
elif ((x>-0.8) & (x<=0)):
y=0.22*x + 0.5
elif ((x>0) & (x<=0.8)):
y=0.23*x + 0.5
elif ((x>0.8) & (x<=1.6)):
y=0.1250*x + 0.6
elif ((x>1.6) & (x<=2.4)):
y= 0.1000*x + 0.67
elif ((x>2.4) & (x<=3.2)):
y= 0.0625*x + 0.76
elif ((x>3.2) & (x<=4)):
y=0.05000*x + 0.8
elif x>4:
y=1
return y
pwl_12 = vectorize(piecewise_linear_12)
def piecewise_linear_12_der(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-3.2)):
y= 0.0500
elif ((x>-3.2) & (x<=-2.4)):
y=0.0500
elif ((x>-2.4) & (x<=-1.6)):
y=0.1
elif ((x>-1.6) & (x<=-0.8)):
y=0.175
elif ((x>-0.8) & (x<=0)):
y=0.22
elif ((x>0) & (x<=0.8)):
y=0.23
elif ((x>0.8) & (x<=1.6)):
y=0.1250
elif ((x>1.6) & (x<=2.4)):
y= 0.1000
elif ((x>2.4) & (x<=3.2)):
y= 0.0625
elif ((x>3.2) & (x<=4)):
y=0.05000
elif x>4:
y=0
return y
pwl_12_der= vectorize(piecewise_linear_12_der)
%%time
neuralnet(pwl_12,pwl_12_der,20000)
###Output
Epoch 0 cost: 0.8853227618385362
Epoch 1000 cost: 0.4578301327718933
Epoch 2000 cost: 0.4466649404017021
Epoch 3000 cost: 0.43750692110803346
Epoch 4000 cost: 0.4288830796774297
Epoch 5000 cost: 0.42056331987016526
Epoch 6000 cost: 0.4121170259154292
Epoch 7000 cost: 0.405692214997865
Epoch 8000 cost: 0.3985543392393636
Epoch 9000 cost: 0.39140146326363806
Epoch 10000 cost: 0.386552223116022
Epoch 11000 cost: 0.3834075073049397
Epoch 12000 cost: 0.3805250341993179
Epoch 13000 cost: 0.37807657085207114
Epoch 14000 cost: 0.37609876184433794
Epoch 15000 cost: 0.37457657217177254
Epoch 16000 cost: 0.3727741334225094
Epoch 17000 cost: 0.3700109458545325
Epoch 18000 cost: 0.3680779861216945
Epoch 19000 cost: 0.3661408775073516
Final cost: 0.3638665612245389
[[449 81]
[ 51 187]]
precision recall f1-score support
False 0.90 0.85 0.87 530
True 0.70 0.79 0.74 238
avg / total 0.84 0.83 0.83 768
Accuracy = 0.828125
CPU times: user 6min 55s, sys: 1.78 s, total: 6min 56s
Wall time: 3min 31s
###Markdown
Piecewise linear function (k=14)
###Code
def piecewise_linear_14(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-3.3)):
y= 0.05*x + 0.2
elif ((x>-3.3) & (x<=-2.64)):
y= 0.04697*x + 0.19
elif ((x>-2.64) & (x<=-1.98)):
y= 0.08333*x + 0.28
elif ((x>-1.98) & (x<=-1.32)):
y= 0.1348*x + 0.38
elif ((x>-1.32) & (x<=-0.66)):
y=0.1970*x + 0.47
elif ((x>-0.66) & (x<=0)):
y=0.2424*x + 0.5
elif ((x>0) & (x<=0.66)):
y=0.2273*x + 0.5
elif ((x>0.66) & (x<=1.32)):
y= 0.1970*x + 0.5
elif ((x>1.32) & (x<=1.98)):
y=0.1364*x + 0.6
elif ((x>1.98) & (x<=2.64)):
y=0.09091*x + 0.69
elif ((x>2.64) & (x<=3.3)):
y= 0.04545*x + 0.8
elif ((x>3.3) & (x<=4)):
y=0.05714*x + 0.77
elif x>4:
y=1
return y
pwl_14 = vectorize(piecewise_linear_14)
def piecewise_linear_14_der(x):
if x<=-4:
y=0
elif ((x>-4)&(x<=-3.3)):
y= 0.05
elif ((x>-3.3) & (x<=-2.64)):
y= 0.04697
elif ((x>-2.64) & (x<=-1.98)):
y= 0.08333
elif ((x>-1.98) & (x<=-1.32)):
y= 0.1348
elif ((x>-1.32) & (x<=-0.66)):
y=0.1970
elif ((x>-0.66) & (x<=0)):
y=0.2424
elif ((x>0) & (x<=0.66)):
y=0.2273
elif ((x>0.66) & (x<=1.32)):
y= 0.1970
elif ((x>1.32) & (x<=1.98)):
y=0.1364
elif ((x>1.98) & (x<=2.64)):
y=0.09091
elif ((x>2.64) & (x<=3.3)):
y= 0.04545
elif ((x>3.3) & (x<=4)):
y=0.05714
elif x>4:
y=0
return y
pwl_14_der= vectorize(piecewise_linear_14_der)
%%time
neuralnet(pwl_14,pwl_14_der,20000)
###Output
Epoch 0 cost: 0.8827934639783532
Epoch 1000 cost: 0.4558364304298477
Epoch 2000 cost: 0.4453638610000218
Epoch 3000 cost: 0.4361698046572098
Epoch 4000 cost: 0.42631015509825604
Epoch 5000 cost: 0.4170864096260227
Epoch 6000 cost: 0.4088598596413058
Epoch 7000 cost: 0.40306864332237113
Epoch 8000 cost: 0.3981013672689292
Epoch 9000 cost: 0.3929929482852894
Epoch 10000 cost: 0.3884021990992073
Epoch 11000 cost: 0.38418157729846253
Epoch 12000 cost: 0.38095774576871283
Epoch 13000 cost: 0.3774859505408159
Epoch 14000 cost: 0.37446981952514946
Epoch 15000 cost: 0.3717662552614881
Epoch 16000 cost: 0.3688984227237718
Epoch 17000 cost: 0.3652597168303922
Epoch 18000 cost: 0.3613616381484808
Epoch 19000 cost: 0.3564310378748857
Final cost: 0.3519905505345262
[[452 71]
[ 48 197]]
precision recall f1-score support
False 0.90 0.86 0.88 523
True 0.74 0.80 0.77 245
avg / total 0.85 0.85 0.85 768
Accuracy = 0.8450520833333334
CPU times: user 8min, sys: 2.8 s, total: 8min 3s
Wall time: 4min 11s
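###Markdown
Before summarising the results, it helps to see the activation functions themselves. The cell below is an added sketch (not part of the original experiments): it overlays the sigmoid with several of the piecewise-linear approximations defined above, which also makes any gaps or kinks in the piecewise definitions easy to spot.
###Code
# Visual comparison of the sigmoid and its piecewise-linear approximations.
import numpy as np
import matplotlib.pyplot as plt
xs = np.linspace(-5, 5, 500)  # 500 points; x = 0 exactly is not included
plt.figure(figsize=(8, 5))
plt.plot(xs, sigmoid(xs), label='sigmoid', linewidth=2)
for k, fn in [(4, pwl_4), (6, pwl_6), (10, pwl_10), (14, pwl_14)]:
    plt.plot(xs, fn(xs), label='piecewise linear (k=%d)' % k, alpha=0.7)
plt.xlabel('x')
plt.ylabel('activation')
plt.legend()
plt.show()
###Output
_____no_output_____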
###Markdown
Results on training set onlyThis is an implementation of a single-hidden-layer neural network d-2d-1, with d input neurons, 2d hidden-layer neurons and 1 output neuron. The model is trained on the PIMA Indians Diabetes dataset (https://www.kaggle.com/uciml/pima-indians-diabetes-database) from the UCI Machine Learning Repository. The accuracy and metrics reported here are on the training set only. After 20000 epochs, the following training accuracies were obtained:
1. Sigmoid activation - 0.86328125
2. Piecewise Linear Function activation (k=4) - 0.7838
3. Piecewise Linear Function activation (k=6) - 0.82421875
4. Piecewise Linear Function activation (k=8) - 0.8411
5. Piecewise Linear Function activation (k=10) - 0.8333
6. Piecewise Linear Function activation (k=12) - 0.828125
7. Piecewise Linear Function activation (k=14) - 0.8450

Cost after 20,000 epochs:
1. Sigmoid activation - 0.305
2. Piecewise Linear Function activation (k=4) - 0.479
3. Piecewise Linear Function activation (k=6) - 0.374204
4. Piecewise Linear Function activation (k=8) - 0.3568
5. Piecewise Linear Function activation (k=10) - 0.3458
6. Piecewise Linear Function activation (k=12) - 0.3638
7. Piecewise Linear Function activation (k=14) - 0.3519

Time taken for 20000 epochs of training:
1. Sigmoid activation - 10.7 s
2. Piecewise Linear Function activation (k=4) - 4min 20s
3. Piecewise Linear Function activation (k=6) - 5min 32s
4. Piecewise Linear Function activation (k=8) - 5min 33s
5. Piecewise Linear Function activation (k=10) - 7min 22s
6. Piecewise Linear Function activation (k=12) - 6min 56s
7. Piecewise Linear Function activation (k=14) - 8min 3s
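To compare these numbers side by side, they can be collected into a small table. This is an added helper cell, not part of the original experiments: the values are simply copied from the summary above, and pandas is assumed to be available in the environment.
###Code
# Tabulate the training-set results listed above (values copied from the summary).
import pandas as pd
summary = pd.DataFrame({
    'activation': ['sigmoid', 'pwl k=4', 'pwl k=6', 'pwl k=8', 'pwl k=10', 'pwl k=12', 'pwl k=14'],
    'train_accuracy': [0.86328125, 0.7838, 0.82421875, 0.8411, 0.8333, 0.828125, 0.8450],
    'final_cost': [0.305, 0.479, 0.374204, 0.3568, 0.3458, 0.3638, 0.3519],
    'train_time': ['10.7 s', '4min 20s', '5min 32s', '5min 33s', '7min 22s', '6min 56s', '8min 3s'],
})
summary.sort_values('train_accuracy', ascending=False)
###Output
_____no_output_____
###Markdown
With Test setThe same experiments are repeated below with an 80/20 train-test split, reporting all metrics on the held-out test data only.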
###Code
X_train_1, X_test_1, y_train_1, y_test_1 = train_test_split(data_x_scaled, data_y, test_size=0.20, random_state=42)
X_train_1= X_train_1.T
y_train_1= y_train_1.reshape(1,614)
X_test_1 =X_test_1.T
y_test_1 =y_test_1.reshape(1,154)
def neuralnet_1(activation,derivative,epochs):
X = X_train_1
Y = y_train_1
np.random.seed(101)
W1 = np.random.randn(n_h, n_x)*np.sqrt(2/n_x)
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(1, n_h)*np.sqrt(2/n_h)
b2 = np.zeros((1, 1))
act=activation
der=derivative
for i in range(epochs):
Z1 = np.matmul(W1, X) + b1
A1 = act(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = sigmoid(Z2)
cost = compute_loss(Y, A2)
dZ2 = A2-Y
dW2 = (1./m) * np.matmul(dZ2, A1.T)
db2 = (1./m) * np.sum(dZ2, axis=1, keepdims=True)
dA1 = np.matmul(W2.T, dZ2)
dZ1 = dA1* der(Z1)
dW1 = (1./m) * np.matmul(dZ1, X.T)
db1 = (1./m) * np.sum(dZ1, axis=1, keepdims=True)
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
if i % 1000 == 0:
print("Epoch", i, "cost: ", cost)
print("Final cost:", cost)
Z1 = np.matmul(W1, X_test_1) + b1
A1 = act(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = sigmoid(Z2)
predictions = (A2>.5)[0,:]
labels = (y_test_1 == 1)[0,:]
print(confusion_matrix(predictions, labels))
print(classification_report(predictions, labels))
print('Accuracy =' , accuracy_score(labels, predictions))
###Output
_____no_output_____
###Markdown
Sigmoid hidden layer
###Code
%%time
neuralnet_1(sigmoid,der_sigmoid,20000)
###Output
Epoch 0 cost: 0.8826177564744794
Epoch 1000 cost: 0.45170029298052283
Epoch 2000 cost: 0.4385031317000155
Epoch 3000 cost: 0.42970420312527746
Epoch 4000 cost: 0.4193740501574361
Epoch 5000 cost: 0.409145913610257
Epoch 6000 cost: 0.4000822375231552
Epoch 7000 cost: 0.3921565649483339
Epoch 8000 cost: 0.38529754319909293
Epoch 9000 cost: 0.37923638037672813
Epoch 10000 cost: 0.3738248510785002
Epoch 11000 cost: 0.3690010746886034
Epoch 12000 cost: 0.36468415355047584
Epoch 13000 cost: 0.3605253038592752
Epoch 14000 cost: 0.3557673360323362
Epoch 15000 cost: 0.3495705478068931
Epoch 16000 cost: 0.34210663100888056
Epoch 17000 cost: 0.33360668381259523
Epoch 18000 cost: 0.3249612049049551
Epoch 19000 cost: 0.31668150971719067
Final cost: 0.30871103609999945
[[78 24]
[21 31]]
precision recall f1-score support
False 0.79 0.76 0.78 102
True 0.56 0.60 0.58 52
avg / total 0.71 0.71 0.71 154
Accuracy = 0.7077922077922078
CPU times: user 8.8 s, sys: 65 ms, total: 8.87 s
Wall time: 4.53 s
###Markdown
Piecewise linear function (k=4)
###Code
%%time
neuralnet_1(pwl_4,pwl_4_der,20000)
###Output
Epoch 0 cost: 0.8818873746538853
Epoch 1000 cost: 0.5096882101814671
Epoch 2000 cost: 0.5096881562640755
Epoch 3000 cost: 0.5096881561844885
Epoch 4000 cost: 0.509688156184371
Epoch 5000 cost: 0.5096881561843708
Epoch 6000 cost: 0.5096881561843708
Epoch 7000 cost: 0.5096881561843708
Epoch 8000 cost: 0.5096881561843708
Epoch 9000 cost: 0.5096881561843708
Epoch 10000 cost: 0.5096881561843708
Epoch 11000 cost: 0.5096881561843708
Epoch 12000 cost: 0.5096881561843708
Epoch 13000 cost: 0.5096881561843708
Epoch 14000 cost: 0.5096881561843708
Epoch 15000 cost: 0.5096881561843708
Epoch 16000 cost: 0.5096881561843708
Epoch 17000 cost: 0.5096881561843708
Epoch 18000 cost: 0.5096881561843708
Epoch 19000 cost: 0.5096881561843708
Final cost: 0.5096881561843708
[[71 23]
[28 32]]
precision recall f1-score support
False 0.72 0.76 0.74 94
True 0.58 0.53 0.56 60
avg / total 0.66 0.67 0.67 154
Accuracy = 0.6688311688311688
CPU times: user 3min 35s, sys: 924 ms, total: 3min 36s
Wall time: 1min 49s
###Markdown
Piecewise linear function (k=6)
###Code
%%time
neuralnet_1(pwl_6,pwl_6_der,20000)
###Output
Epoch 0 cost: 0.8805200707831126
Epoch 1000 cost: 0.45205530532456817
Epoch 2000 cost: 0.441360584862105
Epoch 3000 cost: 0.43400240477653873
Epoch 4000 cost: 0.4272213577895457
Epoch 5000 cost: 0.41921008514503216
Epoch 6000 cost: 0.4107145709930113
Epoch 7000 cost: 0.40203857847085617
Epoch 8000 cost: 0.3932722540553957
Epoch 9000 cost: 0.38600584709985003
Epoch 10000 cost: 0.38036655240216477
Epoch 11000 cost: 0.3760641168854932
Epoch 12000 cost: 0.37245558127917394
Epoch 13000 cost: 0.3696482375747014
Epoch 14000 cost: 0.3663774690733062
Epoch 15000 cost: 0.36395092358440573
Epoch 16000 cost: 0.36219071463497976
Epoch 17000 cost: 0.3604252934497303
Epoch 18000 cost: 0.35902947223199083
Epoch 19000 cost: 0.35772572399515473
Final cost: 0.3562666164489959
[[79 20]
[20 35]]
precision recall f1-score support
False 0.80 0.80 0.80 99
True 0.64 0.64 0.64 55
avg / total 0.74 0.74 0.74 154
Accuracy = 0.7402597402597403
CPU times: user 4min 14s, sys: 1.82 s, total: 4min 16s
Wall time: 2min 11s
###Markdown
Piecewise linear function (k=8)
###Code
%%time
neuralnet_1(pwl_8,pwl_8_der,20000)
###Output
Epoch 0 cost: 0.8814912052405025
Epoch 1000 cost: 0.4546171818773781
Epoch 2000 cost: 0.4399101995246463
Epoch 3000 cost: 0.4313272956257116
Epoch 4000 cost: 0.4204200115279583
Epoch 5000 cost: 0.41062982712781537
Epoch 6000 cost: 0.40168202038228784
Epoch 7000 cost: 0.3949943873818688
Epoch 8000 cost: 0.3901877602786974
Epoch 9000 cost: 0.38465107136801735
Epoch 10000 cost: 0.3797864857167854
Epoch 11000 cost: 0.6563289977481872
Epoch 12000 cost: 0.5960015112786757
Epoch 13000 cost: 0.586713604643444
Epoch 14000 cost: 0.5825164155643267
Epoch 15000 cost: 0.5807423264372055
Epoch 16000 cost: 0.5800128807131464
Epoch 17000 cost: 0.5797007152311695
Epoch 18000 cost: 0.5795580990363449
Epoch 19000 cost: 0.5794875081016301
Final cost: 0.5794488022547432
[[72 21]
[27 34]]
precision recall f1-score support
False 0.73 0.77 0.75 93
True 0.62 0.56 0.59 61
avg / total 0.68 0.69 0.69 154
Accuracy = 0.6883116883116883
CPU times: user 4min 39s, sys: 1.96 s, total: 4min 41s
Wall time: 2min 24s
###Markdown
Piecewise linear function (k=10)
###Code
%%time
neuralnet_1(pwl_10,pwl_10_der,20000)
###Output
Epoch 0 cost: 0.8815936010343659
Epoch 1000 cost: 0.4525293510224265
Epoch 2000 cost: 0.43937176820616497
Epoch 3000 cost: 0.43041317385226546
Epoch 4000 cost: 0.4205590734562757
Epoch 5000 cost: 0.6248654724606356
Epoch 6000 cost: 0.6121170938649125
Epoch 7000 cost: 0.6077005203980033
Epoch 8000 cost: 0.6055430816274949
Epoch 9000 cost: 0.6044051922882407
Epoch 10000 cost: 0.6037455462549146
Epoch 11000 cost: 0.6033198915740974
Epoch 12000 cost: 0.6030200097013609
Epoch 13000 cost: 0.6027953832413481
Epoch 14000 cost: 0.6026200930600106
Epoch 15000 cost: 0.6024794183862809
Epoch 16000 cost: 0.6023642231045625
Epoch 17000 cost: 0.6022684284949416
Epoch 18000 cost: 0.6021877741513265
Epoch 19000 cost: 0.6021191569414158
Final cost: 0.6020603052773791
[[28 3]
[71 52]]
precision recall f1-score support
False 0.28 0.90 0.43 31
True 0.95 0.42 0.58 123
avg / total 0.81 0.52 0.55 154
Accuracy = 0.5194805194805194
CPU times: user 6min 1s, sys: 2.47 s, total: 6min 4s
Wall time: 3min 7s
###Markdown
Piecewise linear function (k=12)
###Code
%%time
neuralnet_1(pwl_12,pwl_12_der,20000)
###Output
Epoch 0 cost: 0.8811576177900142
Epoch 1000 cost: 0.4541869243068809
Epoch 2000 cost: 0.4409038565480759
Epoch 3000 cost: 0.4331581045766795
Epoch 4000 cost: 0.42487246710201687
Epoch 5000 cost: 0.416248694995148
Epoch 6000 cost: 0.4093766562273735
Epoch 7000 cost: 0.4023850311445431
Epoch 8000 cost: 0.39741628194357326
Epoch 9000 cost: 0.3921043213540893
Epoch 10000 cost: 0.3872863033188964
Epoch 11000 cost: 0.38383943766158835
Epoch 12000 cost: 0.38024700192491295
Epoch 13000 cost: 0.3758601939898355
Epoch 14000 cost: 0.37189778300892706
Epoch 15000 cost: 0.3685503787635239
Epoch 16000 cost: 0.3656577105454712
Epoch 17000 cost: 0.36316801419728384
Epoch 18000 cost: 0.35763463640194976
Epoch 19000 cost: 0.352755244390978
Final cost: 0.3492045974961631
[[77 19]
[22 36]]
precision recall f1-score support
False 0.78 0.80 0.79 96
True 0.65 0.62 0.64 58
avg / total 0.73 0.73 0.73 154
Accuracy = 0.7337662337662337
CPU times: user 5min 41s, sys: 2.1 s, total: 5min 44s
Wall time: 2min 56s
###Markdown
Piecewise linear function (k=14)
###Code
%%time
neuralnet_1(pwl_14,pwl_14_der,20000)
###Output
Epoch 0 cost: 0.8785852067282991
Epoch 1000 cost: 0.4526077923927907
Epoch 2000 cost: 0.4403286311070453
Epoch 3000 cost: 0.4318828588635498
Epoch 4000 cost: 0.4224250485865676
Epoch 5000 cost: 0.4124234578029452
Epoch 6000 cost: 0.4050502529469068
Epoch 7000 cost: 0.39841341039899936
Epoch 8000 cost: 0.39172178853051226
Epoch 9000 cost: 0.3863757827098945
Epoch 10000 cost: 0.3804623480727383
Epoch 11000 cost: 0.3750809461901533
Epoch 12000 cost: 0.3714253340080512
Epoch 13000 cost: 0.3668899089941519
Epoch 14000 cost: 0.36324616314279035
Epoch 15000 cost: 0.3591284116917878
Epoch 16000 cost: 0.3558077463873029
Epoch 17000 cost: 0.6182622816557671
Epoch 18000 cost: 0.5928721070989912
Epoch 19000 cost: 0.5862600044838332
Final cost: 0.5832806632194709
[[82 33]
[17 22]]
precision recall f1-score support
False 0.83 0.71 0.77 115
True 0.40 0.56 0.47 39
avg / total 0.72 0.68 0.69 154
Accuracy = 0.6753246753246753
CPU times: user 6min 36s, sys: 2.57 s, total: 6min 38s
Wall time: 3min 24s
|
examples/ITK_UnitTestExample2_AffineRegistration.ipynb | ###Markdown
ElastixThis notebook shows very basic image registration examples with on-the-fly generated binary images.
###Code
import itk
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Image generators
###Code
def image_generator(x1, x2, y1, y2):
image = np.zeros([100, 100], np.float32)
image[y1:y2, x1:x2] = 1
image = itk.image_view_from_array(image)
return image
###Output
_____no_output_____
###Markdown
Affine Test
###Code
# Create test images
fixed_image_affine = image_generator(25,75,25,75)
moving_image_affine = image_generator(1,71,1,91)
# Import Default Parameter Map
parameter_object = itk.ParameterObject.New()
default_affine_parameter_map = parameter_object.GetDefaultParameterMap('affine',4)
default_affine_parameter_map['FinalBSplineInterpolationOrder'] = ['0']
parameter_object.AddParameterMap(default_affine_parameter_map)
# Call registration function
result_image_affine, result_transform_parameters = itk.elastix_registration_method(
fixed_image_affine, moving_image_affine,
parameter_object=parameter_object,
log_to_console=True)
###Output
_____no_output_____
###Markdown
Visualization Affine Test
###Code
%matplotlib inline
# Plot images
fig, axs = plt.subplots(1,3, sharey=True, figsize=[30,30])
plt.figsize=[100,100]
axs[0].imshow(result_image_affine)
axs[0].set_title('Result', fontsize=30)
axs[1].imshow(fixed_image_affine)
axs[1].set_title('Fixed', fontsize=30)
axs[2].imshow(moving_image_affine)
axs[2].set_title('Moving', fontsize=30)
plt.show()
###Output
_____no_output_____
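###Markdown
The registration above also returned `result_transform_parameters`. As an aside, that transform can be applied to an image with Transformix; the sketch below assumes the itk-elastix convenience function `itk.transformix_filter` (keyword names may vary between package versions).
###Code
# Sketch: apply the estimated affine transform with Transformix (illustrative only).
reapplied = itk.transformix_filter(
    moving_image_affine,
    transform_parameter_object=result_transform_parameters)
###Output
_____no_output_____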
###Markdown
ElastixThis notebook shows very basic image registration examples with on-the-fly generated binary images.
###Code
from itk import itkElastixRegistrationMethodPython
from itk import itkTransformixFilterPython
import itk
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Image generators
###Code
def image_generator(x1, x2, y1, y2):
image = np.zeros([100, 100], np.float32)
image[y1:y2, x1:x2] = 1
image = itk.image_view_from_array(image)
return image
###Output
_____no_output_____
###Markdown
Affine Test
###Code
# Create test images
fixed_image_affine = image_generator(25,75,25,75)
moving_image_affine = image_generator(1,71,1,91)
# Import Default Parameter Map
parameter_object = itk.ParameterObject.New()
default_affine_parameter_map = parameter_object.GetDefaultParameterMap('affine',4)
default_affine_parameter_map['FinalBSplineInterpolationOrder'] = ['0']
parameter_object.AddParameterMap(default_affine_parameter_map)
# Call registration function
result_image_affine, result_transform_parameters = itk.elastix_registration_method(
fixed_image_affine, moving_image_affine,
parameter_object=parameter_object,
log_to_console=True)
###Output
_____no_output_____
###Markdown
Visualization Affine Test
###Code
%matplotlib inline
# Plot images
fig, axs = plt.subplots(1,3, sharey=True, figsize=[30,30])
plt.figsize=[100,100]
axs[0].imshow(result_image_affine)
axs[0].set_title('Result', fontsize=30)
axs[1].imshow(fixed_image_affine)
axs[1].set_title('Fixed', fontsize=30)
axs[2].imshow(moving_image_affine)
axs[2].set_title('Moving', fontsize=30)
plt.show()
###Output
_____no_output_____
###Markdown
ElastixThis notebook shows very basic image registration examples with on-the-fly generated binary images.
###Code
from itk import itkElastixRegistrationMethodPython
from itk import itkTransformixFilterPython
import itk
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Image generators
###Code
def image_generator(x1, x2, y1, y2, upsampled=False, bspline=False,
mask=False, artefact=False):
if upsampled:
image = np.zeros([1000, 1000], np.float32)
elif mask:
image = np.zeros([100, 100], np.uint8)
else:
image = np.zeros([100, 100], np.float32)
for x in range(x1, x2):
for y in range(y1, y2):
if bspline:
y += x
if x > 99 or y > 99:
pass
else:
image[x, y] = 1
else:
image[x, y] = 1
if artefact:
image[:, -10:] = 1
image = itk.image_view_from_array(image)
return image
###Output
_____no_output_____
###Markdown
Affine Test
###Code
# Create test images
fixed_image_affine = image_generator(25,75,25,75)
moving_image_affine = image_generator(1,71,1,91)
# Import Default Parameter Map
parameter_object = itk.ParameterObject.New()
default_affine_parameter_map = parameter_object.GetDefaultParameterMap('affine',4)
parameter_object.AddParameterMap(default_affine_parameter_map)
# Call registration function
result_image_affine, result_transform_parameters = itk.elastix_registration_method(
fixed_image_affine, moving_image_affine,
parameter_object=parameter_object,
log_to_console=True)
###Output
_____no_output_____
###Markdown
Visualization Affine Test
###Code
%matplotlib inline
# Plot images
fig, axs = plt.subplots(1,3, sharey=True, figsize=[30,30])
plt.figsize=[100,100]
axs[0].imshow(result_image_affine)
axs[0].set_title('Result', fontsize=30)
axs[1].imshow(fixed_image_affine)
axs[1].set_title('Fixed', fontsize=30)
axs[2].imshow(moving_image_affine)
axs[2].set_title('Moving', fontsize=30)
plt.show()
###Output
_____no_output_____ |
snippets/sktime.ipynb | ###Markdown
Evaluation of **sktime** for time series forecasting:
* [sktime](https://github.com/alan-turing-institute/sktime)
* [Sktime: a Unified Python Library for Time Series Machine Learning](https://towardsdatascience.com/sktime-a-unified-python-library-for-time-series-machine-learning-3c103c139a55)

Contents: Data (Airline dataset, Train-test split, Forecasting horizon) and Forecasters (NaiveForecaster, EnsembleForecaster).
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15, 7)
###Output
_____no_output_____
###Markdown
Data Airline dataset
###Code
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
# load pandas Series data
s_data = load_airline()
s_data.describe()
###Output
_____no_output_____
###Markdown
Train-test split
###Code
from sktime.utils.plotting import plot_series
y_train, y_test = temporal_train_test_split(s_data)
_, ax = plot_series(y_train, y_test, labels=["train", "test"]);
ax.set_title('Number of airline passengers');
###Output
_____no_output_____
###Markdown
Forecasting horizon
###Code
import numpy as np
from sktime.forecasting.base import ForecastingHorizon
horizon = ForecastingHorizon(np.arange(1, len(y_test) + 1))
###Output
_____no_output_____
###Markdown
Forecasters NaiveForecaster
###Code
from sktime.forecasting.naive import NaiveForecaster
forecasters = [('naive last', NaiveForecaster(strategy='last')),
('seasonal last', NaiveForecaster(strategy='last', sp=12)),
('mean', NaiveForecaster(strategy='mean', sp=12)),
('drift', NaiveForecaster(strategy='drift')),
]
sets = {'train': y_train, 'test': y_test}
for name, forecaster in forecasters:
forecaster.fit(y_train)
y_pred = forecaster.predict(horizon)
sets[f'forecaster: {name}'] = y_pred
_, ax = plot_series(*sets.values(), labels=sets.keys())
ax.set_title('Number of airline passengers')
###Output
_____no_output_____
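###Markdown
Before ensembling, it is useful to quantify how far each forecast is from the held-out data. The cell below is an added sketch that computes a symmetric MAPE directly with numpy for every forecaster stored in `sets` (sktime ships its own forecasting metrics, but their import locations have changed across versions, so a plain-numpy version is used here).
###Code
import numpy as np
def smape(y_true, y_hat):
    """Symmetric mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    return 100 * np.mean(2 * np.abs(y_hat - y_true) / (np.abs(y_true) + np.abs(y_hat)))
for name, y_hat in sets.items():
    if name.startswith('forecaster'):
        print(name, '-> sMAPE = %.2f%%' % smape(y_test, y_hat))
###Output
_____no_output_____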
###Markdown
EnsembleForecaster
###Code
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster(forecasters)
forecaster.fit(y_train)
y_pred = forecaster.predict(horizon)
_, ax = plot_series(y_train, y_test, y_pred, labels=[
"train", "test", "EnsembleForecaster"])
ax.set_title('Number of airline passengers')
###Output
_____no_output_____ |
lagouSpider/data-analysis.ipynb | ###Markdown
Import the libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Load the data and inspect its basic information
###Code
df = pd.read_csv(r'深圳_数据分析.csv')
df.head()
# Check for missing values; per the info() output below, the skill-requirement column (技能要求) contains NaNs
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 255 entries, 0 to 254
Data columns (total 14 columns):
职位名 255 non-null object
公司名 255 non-null object
公司规模 255 non-null object
公司主要业务 255 non-null object
融资情况 255 non-null object
公司待遇福利 255 non-null object
职位类型 255 non-null object
技能要求 198 non-null object
职位发布时间 255 non-null object
行政区 255 non-null object
薪水 255 non-null object
工作经验 255 non-null object
工作性质 255 non-null object
学历要求 255 non-null object
dtypes: object(14)
memory usage: 28.0+ KB
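###Markdown
The info() output above shows that only the skill-requirement column (技能要求) has missing values (198 of 255 rows are non-null). As an added sketch (not part of the original workflow), one minimal way to handle this before the keyword analysis is to drop, or fill, those rows:
###Code
# Sketch: drop postings that list no skill requirements.
df_skills = df.dropna(subset=['技能要求'])
print(len(df), '->', len(df_skills), 'rows after dropping missing skill requirements')
###Output
_____no_output_____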
###Markdown
Company size: bar chart of headcount brackets, from most to least common
###Code
company_scale = df['公司规模']
company_scale.unique()
tmp = company_scale.value_counts()
x = list(tmp.index)
y = list(tmp)
from pyecharts.commons import utils # used to execute JavaScript snippets
from pyecharts import options as opts
from pyecharts.charts import Bar
bar = Bar()
bar.add_xaxis(x)
bar.add_yaxis("人数", y, category_gap="60%")
bar.set_series_opts(itemstyle_opts={
"normal": {
"color": utils.JsCode("""new echarts.graphic.LinearGradient(0, 0, 0, 1, [{
offset: 0,
color: 'rgba(0, 244, 255, 1)'
}, {
offset: 1,
color: 'rgba(0, 77, 167, 1)'
}], false)"""),
"barBorderRadius": [30, 30, 30, 30],
"shadowColor": 'rgb(0, 160, 221)',
}})
bar.set_global_opts(title_opts=opts.TitleOpts(title="公司人数规模"))
bar.render_notebook()
###Output
_____no_output_____
###Markdown
Word cloud of the companies' main business areas
###Code
major_business = df['公司主要业务']
major_business.head()
major_business.unique()
# Count keyword frequencies
tmp = {}
for t in list(major_business):
t = str(t)
if '|' in t:
t = t.replace('丨',',')
if '/' in t:
t = t.replace('/',',')
if ' ' in t:
t = t.replace(' ',',')
if '、' in t:
t = t.replace('、',',')
if ',' in t:
t = t.replace(',',',')
t = t.split(',')
for _ in t:
tmp[_] = tmp.get(_,0) + 1
tmp
from pyecharts import options as opts
from pyecharts.charts import Page, WordCloud
from pyecharts.globals import SymbolType
words = list(tmp.items())
def wordcloud_base() -> WordCloud:
c = (
WordCloud()
.add("", words, word_size_range=[20, 100])
.set_global_opts(title_opts=opts.TitleOpts(title="公司业务词云图"))
)
return c
c = wordcloud_base()
c.render_notebook()
###Output
_____no_output_____
###Markdown
Bar chart of financing status
###Code
financing_condition = df['融资情况']
financing_condition.head()
financing_condition.unique()
tmp = financing_condition.value_counts()
tmp
x = list(tmp.index)
y = list(tmp)
from pyecharts.commons import utils # used to execute JavaScript snippets
from pyecharts import options as opts
from pyecharts.charts import Bar
bar = Bar()
bar.add_xaxis(x)
bar.add_yaxis("数量",y, category_gap="60%")
bar.set_series_opts(itemstyle_opts={
"normal": {
"color": utils.JsCode("""new echarts.graphic.LinearGradient(0, 0, 0, 1, [{
offset: 0,
color: 'rgba(0, 244, 255, 1)'
}, {
offset: 1,
color: 'rgba(0, 77, 167, 1)'
}], false)"""),
"barBorderRadius": [30, 30, 30, 30],
"shadowColor": 'rgb(0, 160, 221)',
}})
bar.set_global_opts(title_opts=opts.TitleOpts(title="公司融资情况分布"))
bar.render_notebook()
###Output
_____no_output_____
###Markdown
Word cloud of company benefits and perks
###Code
company_treatment = df['公司待遇福利']
company_treatment.head()
company_treatment.unique()
# Count keyword frequencies
tmp = {}
for t in list(company_treatment):
t = str(t)
if '|' in t:
t = t.replace('丨',',')
if '/' in t:
t = t.replace('/',',')
if ' ' in t:
t = t.replace(' ',',')
if '、' in t:
t = t.replace('、',',')
if ',' in t:
t = t.replace(',',',')
t = t.split(',')
for _ in t:
tmp[_] = tmp.get(_,0) + 1
tmp
from pyecharts import options as opts
from pyecharts.charts import Page, WordCloud
from pyecharts.globals import SymbolType
words = list(tmp.items())
def wordcloud_base() -> WordCloud:
c = (
WordCloud()
.add("", words, word_size_range=[20, 100])
.set_global_opts(title_opts=opts.TitleOpts(title="公司福利待遇词云图"))
)
return c
c = wordcloud_base()
c.render_notebook()
###Output
_____no_output_____
###Markdown
Word cloud of skill requirements
###Code
skill = df['技能要求']
skill.head()
skill.unique()
# Count keyword frequencies
tmp = {}
for t in list(skill.unique()):
t = str(t)
t = t.replace('/',',')
t = t.split(',')
for _ in t:
tmp[_] = tmp.get(_,0) + 1
tmp
from pyecharts import options as opts
from pyecharts.charts import Page, WordCloud
from pyecharts.globals import SymbolType
words = list(tmp.items())
def wordcloud_base() -> WordCloud:
c = (
WordCloud()
.add("", words, word_size_range=[20, 100])
.set_global_opts(title_opts=opts.TitleOpts(title="技能要求词云图"))
)
return c
c = wordcloud_base()
c.render_notebook()
###Output
_____no_output_____
###Markdown
Pie chart of administrative districts
###Code
admin_local = df['行政区']
admin_local.head()
admin_local.unique()
tmp = admin_local.value_counts()
tmp
x = list(tmp.index)
y = list(tmp)
from pyecharts import options as opts
from pyecharts.charts import Page, Pie
def pie_base() -> Pie:
c = (
Pie()
.add("", [list(z) for z in zip(x, y)])
.set_global_opts(title_opts=opts.TitleOpts(title="岗位招聘地区分布比例"))
.set_series_opts(label_opts=opts.LabelOpts(formatter="{b}: {c}"))
.set_global_opts(
legend_opts=opts.LegendOpts(pos_left="70%"),
)
)
return c
c = pie_base()
c.render_notebook()
###Output
_____no_output_____
###Markdown
Bar chart of salaries, by range
###Code
slary = df['薪水']
slary.head()
slary.unique()
tmp = slary.value_counts()
x = list(tmp.index)
y = list(tmp)
from pyecharts import options as opts
from pyecharts.charts import Bar
def bar_datazoom_slider() -> Bar:
c = (
Bar()
.add_xaxis(x)
.add_yaxis("数量", y)
.set_global_opts(
title_opts=opts.TitleOpts(title="薪水区间分布状况"),
datazoom_opts=opts.DataZoomOpts(),
)
)
return c
c = bar_datazoom_slider()
c.render_notebook()
###Output
_____no_output_____
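###Markdown
The bar chart above treats each salary string as a category. To work with salary numerically, the range strings can be parsed first. The sketch below assumes Lagou-style values such as "15k-25k"; the regular expression would need adjusting if the actual format differs.
###Code
# Sketch: parse salary ranges like '15k-25k' into numeric midpoints (thousands of RMB).
import re
def salary_midpoint(s):
    nums = [int(n) for n in re.findall(r'(\d+)k', str(s).lower())]
    return sum(nums) / len(nums) if nums else None
df['salary_mid'] = df['薪水'].apply(salary_midpoint)
df['salary_mid'].describe()
###Output
_____no_output_____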
###Markdown
Bar chart of work-experience requirements
###Code
work_exp = df['工作经验']
work_exp.head()
work_exp.unique()
tmp = work_exp.value_counts()
x = list(tmp.index)
y = list(tmp)
from pyecharts.commons import utils # used to execute JavaScript snippets
from pyecharts import options as opts
from pyecharts.charts import Bar
bar = Bar()
bar.add_xaxis(x)
bar.add_yaxis("数量", y, category_gap="60%")
bar.set_series_opts(itemstyle_opts={
"normal": {
"color": utils.JsCode("""new echarts.graphic.LinearGradient(0, 0, 0, 1, [{
offset: 0,
color: 'rgba(0, 244, 255, 1)'
}, {
offset: 1,
color: 'rgba(0, 77, 167, 1)'
}], false)"""),
"barBorderRadius": [30, 30, 30, 30],
"shadowColor": 'rgb(0, 160, 221)',
}})
bar.set_global_opts(title_opts=opts.TitleOpts(title="工作经验要求分布"))
bar.render_notebook()
###Output
_____no_output_____
###Markdown
Pie chart of job type
###Code
job_nature = df['工作性质']
job_nature.head()
job_nature.unique()
tmp = job_nature.value_counts()
x = list(tmp.index)
y = list(tmp)
from pyecharts import options as opts
from pyecharts.charts import Page, Pie
def pie_base() -> Pie:
c = (
Pie()
.add("", [list(z) for z in zip(x, y)])
.set_global_opts(title_opts=opts.TitleOpts(title="招聘性质"))
.set_series_opts(label_opts=opts.LabelOpts(formatter="{b}: {c}"))
)
return c
c = pie_base()
c.render_notebook()
###Output
_____no_output_____
###Markdown
Bar chart of education requirements
###Code
edu = df['学历要求']
edu.head()
tmp = edu.value_counts()
x = list(tmp.index)
y = list(tmp)
from pyecharts.commons import utils # used to execute JavaScript snippets
from pyecharts import options as opts
from pyecharts.charts import Bar
bar = Bar()
bar.add_xaxis(x)
bar.add_yaxis("数量", y, category_gap="60%")
bar.set_series_opts(itemstyle_opts={
"normal": {
"color": utils.JsCode("""new echarts.graphic.LinearGradient(0, 0, 0, 1, [{
offset: 0,
color: 'rgba(0, 244, 255, 1)'
}, {
offset: 1,
color: 'rgba(0, 77, 167, 1)'
}], false)"""),
"barBorderRadius": [30, 30, 30, 30],
"shadowColor": 'rgb(0, 160, 221)',
}})
bar.set_global_opts(title_opts=opts.TitleOpts(title="招聘学历要求分布"))
bar.render_notebook()
###Output
_____no_output_____ |
CNN_CIFAR10_BN.ipynb | ###Markdown
###Code
# Install TensorFlow
!pip install tensorflow-gpu
try:
%tensorflow_version 2.x # Colab only.
except Exception:
pass
import tensorflow as tf
print(tf.__version__)
print(tf.test.gpu_device_name())
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
#imports
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Conv2D, Dense, Flatten, Dropout, GlobalMaxPooling2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.models import Model
# Load data
cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train, y_test = y_train.flatten(), y_test.flatten()
print("x_train.shape:", x_train.shape)
print("y_train.shape", y_train.shape)
# number of classes
K = len(set(y_train))
print("number of classes:", K)
# Build the model using the functional API
i = Input(shape=x_train[0].shape)
# x = Conv2D(32, (3, 3), strides=2, activation='relu')(i)
# x = Conv2D(64, (3, 3), strides=2, activation='relu')(x)
# x = Conv2D(128, (3, 3), strides=2, activation='relu')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(i)
x = BatchNormalization()(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
# x = Dropout(0.2)(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
# x = Dropout(0.2)(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
# x = Dropout(0.2)(x)
# x = GlobalMaxPooling2D()(x)
x = Flatten()(x)
x = Dropout(0.2)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(K, activation='softmax')(x)
model = Model(i, x)
model.summary()
# Compile
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=15)
# Fit with data augmentation
# Note: if you run this AFTER calling the previous model.fit(), it will CONTINUE training where it left off
batch_size = 32
data_generator = tf.keras.preprocessing.image.ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
train_generator = data_generator.flow(x_train, y_train, batch_size)
steps_per_epoch = x_train.shape[0] // batch_size
history = model.fit(train_generator, validation_data=(x_test, y_test), steps_per_epoch=steps_per_epoch, epochs=50)  # Model.fit accepts generators in TF 2.x; fit_generator is deprecated
# Plot loss per iteration
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
# Plot accuracy per iteration
plt.plot(history.history['accuracy'], label='acc')
plt.plot(history.history['val_accuracy'], label='val_acc')
plt.legend()
# Plot confusion matrix
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
p_test = model.predict(x_test).argmax(axis=1)
cm = confusion_matrix(y_test, p_test)
plot_confusion_matrix(cm, list(range(10)))
# label mapping
labels = '''airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck'''.split()
# Show some misclassified examples
misclassified_idx = np.where(p_test != y_test)[0]
i = np.random.choice(misclassified_idx)
plt.imshow(x_test[i], cmap='gray')
plt.title("True label: %s Predicted: %s" % (labels[y_test[i]], labels[p_test[i]]));
###Output
_____no_output_____ |
.PCA_analysis_of_MD_simulations-1.ipynb | ###Markdown
Part 1: Comparison of trajectories of wild-type neuraminidase and of the I233R/H275Y (IRHY) double mutant.The data for this part of the workshop comes from:[Long time scale GPU dynamics reveal the mechanism of drug resistance of the dual mutant I223R/H275Y neuraminidase from H1N1-2009 influenza virus.](https://www.ncbi.nlm.nih.gov/pubmed/22574858)Woods CJ, Malaisree M, Pattarapongdilok N, Sompornpisut P, Hannongbua S, Mulholland AJ.Biochemistry. 2012 May 29;51(21):4364-75. doi: 10.1021/bi300561n.You have been provided with two trajectory files (AMBER binpos format): `wt_ca.binpos` and `irhy_ca.binpos`. The 2400 frames in each trajectory file are spaced every 200ps from 20ns to 500ns. For computational simplicity, the files have been stripped down to just the coordinates of the C-alpha atoms.Let's begin by loading the two trajectories into MDTraj trajectory objects, joining them together into a single trajectory, then visualising the dynamics:
###Code
import mdtraj as mdt
import nglview as nv
# Load the data for the wt and irhy simulations:
t_wt = mdt.load('data/wt_ca.binpos', top='data/wt_ca.pdb')
t_irhy = mdt.load('data/irhy_ca.binpos', top='data/irhy_ca.pdb')
# Combine the two sets of trajectory data into one trajectory:
trajdata = t_wt.join(t_irhy, check_topology=False)
view = nv.show_mdtraj(trajdata)
view
###Output
_____no_output_____
###Markdown
Notice that half way through the movie the protein jumps - this marks the transition from viewing the dynamics of the wild-type protein to viewing the dynamics of the mutant - the two simulations happen to have been set up in different parts of coordinate space.--- Part 1A: RMSD Analysis To begin with, we will plot the RMSD of each snapshot in each file relative to the first. Run the following cell. The Python code here loads the two trajectories, defines a function to calculate and then plot RMSDs, and then applies the function to the data.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'font.size': 15}) #This sets a better default label size for plots
# Create labels for the two datasets:
datanames = ['wildtype', 'irhy mutant']
# define the plotting function:
def plot_rmsd(traj, datanames):
"""
This function takes a MDTraj trajectory and a list of data names and produces an rmsd plot.
"""
traj.superpose(traj[0]) # least squares fits each snapshot to the first.
frames_per_set = len(traj) // len(datanames) # we assume each trajectory file is the same length.
for i in range(len(datanames)):
# The next two lines do the rmsd calculation:
diff = traj.xyz[i * frames_per_set : (i + 1) * frames_per_set] - traj.xyz[0]
rmsd = np.sqrt((diff * diff).sum(axis=2).mean(axis=1))
plt.plot(rmsd, label=datanames[i]) # plot the line for this dataset on the graph.
plt.xlabel('Frame number')
plt.ylabel('RMSD (nm.)')
plt.legend(loc='lower right')
# now use the plotting function:
plot_rmsd(trajdata, datanames)
###Output
_____no_output_____
###Markdown
What do you conclude from these graphs? Do the simulations look well-equilibrated? Let's calculate the root-mean-square fluctuations (RMSFs) of the atoms in each trajectory. This will tell us which parts of the structure are most mobile, and which are most rigid. The most flexible parts of the system are likely to be those most difficult to equilibrate and to sample well.
###Code
def plot_rmsf(traj):
"""
Plots the root mean square fluctuations of the atoms in a MDTraj trajectory.
"""
diff = traj.xyz - traj.xyz.mean(axis=0)
rmsf = np.sqrt((diff * diff).sum(axis=2).mean(axis=0))
plt.xlabel('atom number')
plt.ylabel('RMSF (nm.)')
plt.plot(rmsf)
plt.figure(figsize=(15,5))
plt.subplot(121)
frames_per_set = len(trajdata) // len(datanames)
plot_rmsf(trajdata[:frames_per_set]) # the first half of the cofasu has the wt data.
plt.title(datanames[0])
plt.subplot(122)
plot_rmsf(trajdata[frames_per_set:]) # the second half of the cofasu has the irhy data.
plt.title(datanames[1])
###Output
_____no_output_____
###Markdown
The left-hand plot is for the wild-type trajectory, the right-hand one is for the irhy mutant. It's clear that in both cases the C-terminus of the protein is exceptionally dynamic, and probably the least well-sampled. Since this region is well away from the oseltamivir binding site, let's repeat the RMSD analysis, leaving residues 370 onwards out:
###Code
# Create an MDTraj trajectory for just a selection of the atoms in the system:
selection = trajdata.topology.select('resid 1 to 370')
seldata = mdt.Trajectory(trajdata.xyz[:, selection], trajdata.topology.subset(selection))
plot_rmsd(seldata, datanames)
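# (Sketch for the exercise below, not part of the original analysis; the residue range
# is the one suggested in the exercise.) To repeat the RMSD analysis on a different
# region, you could swap in another MDTraj selection, e.g.:
# selection = trajdata.topology.select('resid 100 to 300')
# seldata = mdt.Trajectory(trajdata.xyz[:, selection], trajdata.topology.subset(selection))
# plot_rmsd(seldata, datanames)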
###Output
_____no_output_____
###Markdown
Are the results as you expect? If you discount the particularly flexible C-terminus of the protein, do these simulations otherwise look well equilibrated?**_EXERCISE_**:**The RMSF plot shows that there are also quite dynamic regions towards the N-terminus of the protein. What happens if you leave these out of the RMSD analysis too? Try editing the cell above so the selection is `resid 100 to 300` and repeat the analysis. Experiment with different selections.** Part 1B: PCA AnalysisNow let's move from analysis by RMSD to analysis by PCA.The code below does a PCA analysis on both trajectory sets combined, and then plots the projection of each trajectory onto the common PC1/PC2 subspace:
###Code
from mdplus import pca
# First define a plotting function:
def plot_pca(scores, datanames, highlight=None):
"""
Plots the projection of each trajectory in the cofasu in the PC1/PC2 subspace.
If highlight is a number, this dataset is plotted in red against all others in grey.
"""
p1 = scores[:,0] # the projection of each snapshot along the first principal component
p2 = scores[:,1] # the projection along the second.
frames_per_rep = len(p1) // len(datanames) # number of frames (snapshots) in each dataset - assumed equal length
for i in range(len(datanames)):
start = i * frames_per_rep
end = (i + 1) * frames_per_rep
if highlight is None: # each dataset is plotted with a different colour
plt.plot(p1[start:end], p2[start:end], label=datanames[i])
plt.text(p1[start], p2[start], 'start')
plt.text(p1[end-1], p2[end-1], 'end')
else:
if i != highlight:
plt.plot(p1[start:end], p2[start:end], color='grey')
if highlight is not None:
start = highlight * frames_per_rep
end = (highlight + 1) * frames_per_rep
plt.plot(p1[start:end], p2[start:end], color='red', label=datanames[highlight])
plt.text(p1[start], p2[start], 'start')
plt.text(p1[end-1], p2[end-1], 'end')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
# Now use it:
selection = trajdata.topology.select('resid 1 to 370')
seldata = trajdata.atom_slice(selection)
p = pca.PCA()
scores = p.fit_transform(seldata.xyz)
plot_pca(scores, datanames)
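# (Optional sketch, an assumption rather than part of the original workshop.) To see how
# much of the total variance PC1 and PC2 actually capture, one option is to run
# scikit-learn's PCA on the flattened coordinates and inspect explained_variance_ratio_:
# from sklearn.decomposition import PCA as skPCA
# coords = seldata.xyz.reshape(len(seldata), -1)   # shape (n_frames, n_atoms * 3)
# print(skPCA(n_components=2).fit(coords).explained_variance_ratio_)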
###Output
_____no_output_____
###Markdown
Part 1: Comparison of trajectories of wild-type neuraminidase and of the I223R/H275Y (IRHY) double mutant.The data for this part of the workshop comes from:[Long time scale GPU dynamics reveal the mechanism of drug resistance of the dual mutant I223R/H275Y neuraminidase from H1N1-2009 influenza virus.](https://www.ncbi.nlm.nih.gov/pubmed/22574858)Woods CJ, Malaisree M, Pattarapongdilok N, Sompornpisut P, Hannongbua S, Mulholland AJ.Biochemistry. 2012 May 29;51(21):4364-75. doi: 10.1021/bi300561n.You have been provided with two trajectory files (AMBER binpos format): `wt_ca.binpos` and `irhy_ca.binpos`. The 2400 frames in each trajectory file are spaced every 200ps from 20ns to 500ns. For computational simplicity, the files have been stripped down to just the coordinates of the C-alpha atoms.Let's begin by loading the two trajectories into MDTraj trajectory objects, joining them together into a single trajectory, then visualising the dynamics:
###Code
import mdtraj as mdt
import nglview as nv
# Load the data for the wt and irhy simulations:
t_wt = mdt.load('data/pca_analysis/wt_ca.binpos', top='data/pca_analysis/wt_ca.pdb')
t_irhy = mdt.load('data/pca_analysis/irhy_ca.binpos', top='data/pca_analysis/irhy_ca.pdb')
# Combine the two sets of trajectory data into one trajectory:
trajdata = t_wt.join(t_irhy, check_topology=False)
view = nv.show_mdtraj(trajdata)
view
###Output
_____no_output_____
###Markdown
Notice that halfway through the movie the protein jumps - this marks the transition from viewing the dynamics of the wild-type protein to viewing the dynamics of the mutant - the two simulations happen to have been set up in different parts of coordinate space.--- Part 1A: RMSD Analysis To begin with, we will plot the RMSD of each snapshot in each file relative to the first.Run the following cell. The python code here loads the two trajectories, defines a function to calculate and then plot RMSDs, and then applies the function to the data.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'font.size': 15}) #This sets a better default label size for plots
# Create labels for the two datasets:
datanames = ['wt', 'irhy']
# define the plotting function:
def plot_rmsd(traj, datanames):
"""
This function takes a MDTraj trajectory and a list of data names and produces an rmsd plot.
"""
traj.superpose(traj[0]) # least squares fits each snapshot to the first.
frames_per_set = len(traj) // len(datanames) # we assume each trajectory file is the same length.
for i in range(len(datanames)):
# The next two lines do the rmsd calculation:
diff = traj.xyz[i * frames_per_set : (i + 1) * frames_per_set] - traj.xyz[0]
rmsd = np.sqrt((diff * diff).sum(axis=2).mean(axis=1))
plt.plot(rmsd, label=datanames[i]) # plot the line for this dataset on the graph.
plt.xlabel('Frame number')
plt.ylabel('RMSD (nm.)')
plt.legend(loc='lower right')
# now use the plotting function:
plot_rmsd(trajdata, datanames)
###Output
_____no_output_____
###Markdown
What do you conclude from these graphs? Do the simulations look well-equilibrated? Let's calculate the root-mean-square fluctuations (RMSFs) of the atoms in each trajectory. This will tell us which parts of the structure are most mobile, and which are most rigid. The most flexible parts of the system are likely to be those most difficult to equilibrate and to sample well.
###Code
def plot_rmsf(traj):
"""
Plots the root mean square fluctuations of the atoms in a MDTraj trajectory.
"""
diff = traj.xyz - traj.xyz.mean(axis=0)
rmsf = np.sqrt((diff * diff).sum(axis=2).mean(axis=0))
plt.xlabel('atom number')
plt.ylabel('RMSF (nm.)')
plt.plot(rmsf)
plt.figure(figsize=(15,5))
plt.subplot(121)
frames_per_set = len(trajdata) // len(datanames)
plot_rmsf(trajdata[:frames_per_set]) # the first half of the cofasu has the wt data.
plt.title(datanames[0])
plt.subplot(122)
plot_rmsf(trajdata[frames_per_set:]) # the second half of the cofasu has the irhy data.
plt.title(datanames[1])
###Output
_____no_output_____
###Markdown
The left-hand plot is for the wild-type trajectory, the right-hand one is for the irhy mutant. It's clear that in both cases the C-terminus of the protein is exceptionally dynamic, and probably the least well-sampled. Since this region is well away from the oseltamivir binding site, let's repeat the RMSD analysis, leaving residues 370 onwards out:
###Code
# Create an MDTraj trajectory for just a selection of the atoms in the system:
selection = trajdata.topology.select('resid 1 to 370')
seldata = mdt.Trajectory(trajdata.xyz[:, selection], trajdata.topology.subset(selection))
plot_rmsd(seldata, datanames)
###Output
_____no_output_____
###Markdown
Are the results as you expect? If you discount the particularly flexible C-terminus of the protein, do these simulations otherwise look well equilibrated?**_EXERCISE_**:**The RMSF plot shows that there are also quite dynamic regions towards the N-terminus of the protein. What happens if you leave these out of the RMSD analysis too? Try editing the cell above so the selection is `resid 100 to 300` and repeat the analysis. Experiment with different selections.** Part 1B: PCA AnalysisNow let's move from analysis by RMSD to analysis by PCA.The code below does a PCA analysis on both trajectory sets combined, and then plots the projection of each trajectory onto the common PC1/PC2 subspace:
###Code
from MDPlus.analysis import pca
# First define a plotting function:
def plot_pca(pca_model, datanames, highlight=None):
"""
Plots the projection of each trajectory in the cofasu in the PC1/PC2 subspace.
If highlight is a number, this dataset is plotted in red against all others in grey.
"""
p1 = pca_model.projs[0] # the projection of each snapshot along the first principal component
    p2 = pca_model.projs[1] # the projection along the second.
frames_per_rep = len(p1) // len(datanames) # number of frames (snapshots) in each dataset - assumed equal length
for i in range(len(datanames)):
start = i * frames_per_rep
end = (i + 1) * frames_per_rep
if highlight is None: # each dataset is plotted with a different colour
plt.plot(p1[start:end], p2[start:end], label=datanames[i])
plt.text(p1[start], p2[start], 'start')
plt.text(p1[end-1], p2[end-1], 'end')
else:
if i != highlight:
plt.plot(p1[start:end], p2[start:end], color='grey')
if highlight is not None:
start = highlight * frames_per_rep
end = (highlight + 1) * frames_per_rep
plt.plot(p1[start:end], p2[start:end], color='red', label=datanames[highlight])
plt.text(p1[start], p2[start], 'start')
plt.text(p1[end-1], p2[end-1], 'end')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend(loc='upper left')
# Now use it:
selection = trajdata.topology.select('resid 1 to 370')
seldata = mdt.Trajectory(trajdata.xyz[:, selection], trajdata.topology.subset(selection))
p = pca.fromtrajectory(seldata)
plot_pca(p, datanames)
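# The plot_pca function above also takes a 'highlight' argument; for example, to draw
# just the irhy trajectory in red against the wild-type in grey, you could call:
# plot_pca(p, datanames, highlight=1)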
###Output
_____no_output_____ |
nickel/A03_Deutsch_Algorithm.ipynb | ###Markdown
prepared by Berat Yenilen, Utku Birkan, Arda Çınar, Cenk Tüysüz and Özlem Salehi (QTurkey) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ Deutsch's Algorithm In this notebook, we will look at one of the first problems that is solved using quantum computers with an advantage compared to classical computers. Deutsch's problemGiven a boolean function $f:\{0,1\} \rightarrow \{0, 1\}$, we say $f$ is balanced if $f(0) \neq f(1)$ and constant if $f(0) = f(1)$.Given $f:\{0,1\} \rightarrow \{0, 1\}$ as an oracle, that is we can evaluate it for an input by making queries but we can't look inside, the problem is to decide whether $f$ is constant or balanced. Oracle model of computation Suppose that your friend picks such a function $f$ and you try to guess whether it is constant or balanced. You are only allowed to ask questions like "What is $f(0)$?" Each question you ask, is a query to the function $f$. In quantum computing, many algorithms rely on this oracle model of computation and the aim is to solve some problem making as minimum queries as possible. Classical solutionGiven such a function, we need to evaluate the function *twice* to get an answer using a classical computer. Quantum solutionWe had previously established that every 'classical' logical function $f$ can be converted to an equivalent unitary operator $U_f$ (by constructing a logical quantum circuit). Now we are going to propose a quantum algorithm that evaluates $U_f$ only *once*. Algorithm We construct a 2 qubit circuit.- Set the second qubit to state $\ket{-}$ by applying $X$ and $H$ gates.- Apply $H$ to first qubit.- Apply $U_f$.- Apply $H$ to first qubit.- Measure the first qubit. If it is 0 then $f$ is constant. If it is 1, then $f$ is balanced. Analysis We start with the initial state $\ket{\psi_0} = \ket{0}\ket{0}$. 
Next we apply an $X$ gate to the second qubit and obtain the state $\ket{\psi_1} = \ket{0}\ket{1}.$After applying $H$ to both qubits, the first qubit is in the equal superposition state and the second qubit is now in state $\ket{-}$. \begin{align*}\ket{\psi_2} &= \left(\frac{1}{\sqrt{2}}\ket{0} +\frac{1}{\sqrt{2}}\ket{1} \right) \ket{-} \\ &= \frac{1}{\sqrt{2}}\ket{0}\ket{-} +\frac{1}{\sqrt{2}}\ket{1}\ket{-} \\ \\\hspace{-2in}\mbox{Next we apply $U_f$ to $\ket{\psi_2}$ and obtain $\ket{\psi_3}$}.\\\\\ket{\psi_3} &= U_f\left(\frac{1}{\sqrt{2}}\ket{0}\ket{-}+\frac{1}{\sqrt{2}}\ket{1}\ket{-}\right) \\&= \frac{1}{\sqrt{2}}U_f\ket{0}\ket{-}+\frac{1}{\sqrt{2}}U_f\ket{1}\ket{-} &\mbox{ Linearity of the operator.} \\&= \frac{1}{\sqrt{2}}(-1)^{f(0)}\ket{0}\ket{-}+\frac{1}{\sqrt{2}}(-1)^{f(1)}\ket{1}\ket{-} &\mbox{ By phase kickback.} \\&= \left(\frac{1}{\sqrt{2}}(-1)^{f(0)}\ket{0}+\frac{1}{\sqrt{2}}(-1)^{f(1)}\ket{1}\right)\ket{-} \\\\\end{align*} Let's focus on the first qubit. Now we will move on to vector notation as the analysis will be easier. We can express $\ket{\psi_3}$ using the following vector:$$\hspace{-3.1in} \ket{\psi_{3,0}} = \frac{1}{\sqrt{2}}\myvector{(-1)^{f(0)} \\ (-1)^{f(1)}} $$Next, we apply $H$ gate to first qubit and obtain the following state vector:$$ \hspace{-2.5in}\ket{\psi_{4,0}} =\frac{1}{\sqrt{2}}\hadamard \myvector{(-1)^{f(0)} \\ (-1)^{f(1)}}$$ $$ \hspace{-2in}=\frac{1}{2}\myvector{ (-1)^{f(0)} + (-1)^{f(1)} \\ (-1)^{f(0)} - (-1)^{f(1)} } $$ Now let's consider the two cases. - $f$ is constant:In this case $ f(0) = f(1) $ and $\ket{\psi_{4,0}}= \myvector{ (-1)^{f(0)} \\ 0 } $ and the corresponding quantum state is $\ket{\psi_{4,0}}=(-1)^{f(0)} \ket{0}$. Hence, we observe 0 with probability 1. (Since $f(0)=f(1)$, you can equivalently replace it.) - $f$ is balanced:In this case, $ f(0) \neq f(1) $ and $\ket{\psi_{4,0}}= \myvector{ 0 \\ (-1)^{f(0)} } $ and the corresponding quantum state is $\ket{\psi_{4,0}}=(-1)^{f(0)} \ket{1}$. Hence, we observe 1 with probability 1. So, we can find (with 100% certainty) whether $f$ is constant or balanced by making only a single query to function $f$. _Note: Alternatively, we could analyze the state $\left(\frac{1}{\sqrt{2}}(-1)^{f(0)}\ket{0}+\frac{1}{\sqrt{2}}(-1)^{f(1)}\ket{1}\right)\ket{-}$ for each possible $f$ and then apply $H$ to see its effect. For instance, if $f(0)=f(1)$, then $\ket{\psi_3}$ reduces to $\ket{+}\ket{-}$ so that after applying $H$, you obtain $\ket{0}$._ Task 1 You are given an oracle function called `oracle()`, which returns randomly a quantum circuit with 2 qubits corresponding to an either constant or a balanced function $f$. This circuit represents the operator $U_f$ in our algorithm. Note that qubit 0 is the input and qubit 1 is the output qubit.Implement the proposed algorithm to decide whether or not your oracle function is constant or even. (Note: You should be able the see the circuit structure of $U_f$, if you draw your circuit. Can you check whether your result is correct or not by looking at this circuit?)Qiskit notes:- Run the following cell to load oracle function. `oracle()` returns a quantum circuit implementing $U_f$.- You can use `circuit.compose(oracle(), inplace=True)` to add the oracle to your whole circuit. (In general, you can define functions returning circuits and append them to your circuit by `compose` method.)- Barriers are not quantum programming primitives but they instruct qiskit to not apply any optimizations across the barrier and also useful for visualization. 
You may add them to your circuit using `circuit.barrier()`.
###Code
%run ../include/oracle.py
from qiskit import QuantumCircuit, execute, Aer
circuit = QuantumCircuit(2, 1)
# Your code here
# Apply X and H to qubit 1
#
# Apply H to qubit 0
# Apply oracle
#
# Apply H to qubit 0
#
# Measure qubit 0
#
circuit.draw(output='mpl')
job = execute(circuit, Aer.get_backend('qasm_simulator'),shots=10000)
counts = job.result().get_counts()
print(counts)
###Output
_____no_output_____
###Markdown
click for our solution Task 2 There are four possible functions $f(x)$. Could you identify what these are? Write down the `oracle()` function which implements each. That is, you should construct a circuit implementing $U_f: \ket{x}\ket{y} \mapsto \ket{x}\ket{y \oplus f(x)} $. Note that qubit 0 is the input and qubit 1 is the output qubit. One of the functions is implemented for you to give you an idea.
###Code
import random
from qiskit import QuantumCircuit, execute, Aer
def oracle1():
circuit = QuantumCircuit(2)
#do something
return circuit
# f(0)=f(1)=1
def oracle2():
circuit = QuantumCircuit(2)
circuit.barrier()
circuit.x(1)
circuit.barrier()
return circuit
def oracle3():
circuit = QuantumCircuit(2)
#do something
return circuit
def oracle4():
circuit = QuantumCircuit(2)
#do something
return circuit
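# (Hedged sketch, not part of the original task.) One way to sanity-check an oracle
# without spoiling the exercise: prepare |x> on qubit 0, apply U_f, and measure
# qubit 1, which then holds f(x). evaluate_f is a helper introduced here for
# illustration only; it reuses the simulator backend imported above.
def evaluate_f(oracle, x):
    qc = QuantumCircuit(2, 1)
    if x == 1:
        qc.x(0)                      # set the input qubit to |x>
    qc.compose(oracle(), inplace=True)
    qc.measure(1, 0)                 # the output qubit carries f(x)
    counts = execute(qc, Aer.get_backend('qasm_simulator'), shots=100).result().get_counts()
    return max(counts, key=counts.get)
# For instance, oracle2 implements f(0)=f(1)=1, so both calls should return '1':
print(evaluate_f(oracle2, 0), evaluate_f(oracle2, 1))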
###Output
_____no_output_____ |
src/hcds-a2-bias.ipynb | ###Markdown
Anushna Prakash DATA 512 - Human-Centered Data Science October 14, 2021 A2 - Bias in Data The goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries using a dataset of Wikipedia articles, country populations, and a machine learning service called ORES to estimate the quality of each article to show the quality of coverage of politicians on Wikipedia by country. This notebook will output: - the countries with the greatest and least coverage of politicians on Wikipedia compared to their population. - the countries with the highest and lowest proportion of high quality articles about politicians. - a ranking of geographic regions by articles-per-person and proportion of high quality articles. Step 0: Set Up Notebook
###Code
# Optional: Make notebook width wider
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:55% !important; }</style>"))
# Import libraries
import pandas as pd
import numpy as np
import json
import requests
import math
###Output
_____no_output_____
###Markdown
Step 1: Import dataSee the `README.md` for original data sources that were downloaded into the `data_raw` folder. Here I download Wikipedia articles about politicians and a dataset of the population of countries in the world and their regions.
###Code
# See README.md for where data was downloaded from originally.
# Import from data_raw folder assuming we are running from the src folder.
page_data = pd.read_csv('../data_raw/country/data/page_data.csv')
population = pd.read_csv('../data_raw/WPDS_2020_data.csv')
###Output
_____no_output_____
###Markdown
The Wikipedia page dataset has a page name, the country that the page is for, and a `rev_id` which uniquely identifies each page.
###Code
page_data.head()
page_data.info()
page_data.isna().sum()
###Output
_____no_output_____
###Markdown
The population data set has countries in the `Name` column in normal case, and the regions they belong in in uppercase. Although one value in the `FIPS` column is missing, we will not be requiring this column.
###Code
population.head()
population.info()
population.isna().sum()
population.loc[population['FIPS'].isna(),]
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data Both `page_data.csv` and `WPDS_2020_data.csv` contain some rows that I will need to filter out and/or ignore when I combine the datasets in the next step. In the case of `page_data.csv`, the dataset contains some page names that start with the string "Template:". These pages are not Wikipedia articles, and will not be included in the analysis. Similarly, `WPDS_2020_data.csv` contains some rows that provide cumulative regional population counts, rather than country-level counts; these regional rows have ALL CAPS values in the `Name` field (e.g. AFRICA, OCEANIA). I will first focus on keeping just the base country names to match up to the page data set, and later will use the regional hierarchy to analyze the page data by world region. There is still one sub-region that is technically not a country that will end up being included in the output, the Channel Islands, but since this sub-region has no countries underneath it, it is retained.
###Code
# Remove page names that begin with 'Template:' since these are not wikipedia articles
page_data = page_data.loc[~page_data['page'].str.startswith('Template:'), ]
page_data.head()
original_population = population.copy()
population = population.loc[~population['Name'].str.isupper(), ]
population.info()
population.loc[population['Type'] != 'Country',]
# Save just the list of countries assuming we are in the src folder
population.to_csv('../data_clean/WPDS_countries_only.csv', index = False)
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality PredictionsNow I retrieve the predicted quality scores for each article in the Wikipedia dataset using the ORES API. See the `README.md` for more information about what this API is and how it is used. The API will output for each `rev_id` the article quality estimates, which are: - FA - Featured article - GA - Good article - B - B-class article - C - C-class article - Start - Start-class article - Stub - Stub-class article Note that there is also an `ores` package that can be pip installed, but I ran into significant issues trying to do this and so the API is being used in this case. Also, be aware that the API call can handle multiple `rev_ids` in one call, but only up to 50. For this reason, the dataset is divided into batches with each batch having a maximum of 50 articles in it. Each batch is sent to the API and the `json` calls are saved in the `data_raw/api_dump` folder so that if you are re-running this analysis, you don't have to call the API each time. The API call does take a few minutes to run so it can be time-consuming.
###Code
def api_call(endpoint,parameters):
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
return response
# Fill with your own information if reproducing
headers = {
'User-Agent': 'https://github.com/anushnap',
'From': '[email protected]'
}
# Use the scores endpoint which allows multiple revids to be sent up at once. This endpoint is set up
# so that only the articlequality model is returned on English wikipedia.
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={revid}'
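# For illustration, a formatted request for two (made-up) rev_ids would look like:
# https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids=123456|789012
# where the '|' character separates the individual rev_ids in a single call.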
# Make a copy of the original data frame so it is not mutated in place.
df = page_data.copy()
# Use 49 as a ceiling so that if there are remainders, the maximum it will go up to is 50 before increasing the number of batches
# to prevent the API call from failing.
n_batches = math.ceil(len(df) / 49)
df['row_num'] = np.arange(len(df))
df['batch_num'] = df['row_num'] % n_batches
###Output
_____no_output_____
###Markdown
Uncomment and run these cells if you are re-running and re-downloading the predictions from the API. Otherwise, skip this block and download the already-saved data from .json files in the `data_raw/api_dump` folder.
###Code
# for n in range(n_batches):
# id_str = '|'.join(df.loc[df['batch_num'] == n, 'rev_id'].astype(str))
# params = {
# "revid" : id_str
# }
# call = api_call(endpoint, params)
# filename = 'ores_scores_enwiki_articlequality_batchnum-' + str(n) + '.json'
# with open('../data_raw/api_dump/' + filename, 'w', encoding='utf-8') as f:
# json.dump(call, f, ensure_ascii = False, indent=4)
###Output
_____no_output_____
###Markdown
We now read the saved `json` files back in; if a prediction exists for a `rev_id`, it is stored in the copied dataframe in a column called `prediction`, and if it does not, `np.nan` is stored instead. Since there are several hundred saved `json` files in the folder, this block takes ~2 minutes to run.
###Code
# Read the data back in and get the prediction if it exists
# If the prediction does not exist, there will be nothing in the json file after ['articlequality']['score'].
for n in range(n_batches):
filename = '../data_raw/api_dump/ores_scores_enwiki_articlequality_batchnum-' + str(n) + '.json'
temp = json.load(open(filename))['enwiki']['scores']
for i in temp:
int_id = int(i)
# print(int_id)
try:
prediction = temp[i]['articlequality']['score']['prediction']
except KeyError:
prediction = np.nan
finally:
df.loc[(df['rev_id'] == int_id), 'prediction'] = prediction
df.isna().sum()
###Output
_____no_output_____
###Markdown
Get a list of the articles for which the API did not return a prediction and save it in `data_clean`.
###Code
# Save articles for which no prediction was returned assuming we are running from the src folder
df.loc[df['prediction'].isna()].to_csv('../data_clean/articles_missing_prediction.csv', index = False)
###Output
_____no_output_____
###Markdown
Step 4: Combining the DatasetsHere we will merge the Wikipedia page data with the article prediction scores into the population data set. Both have fields containing country names which will be merged on.Countries that have a population in the population dataset but no articles are removed, as are articles from countries that have no population in the population data set; these unmatched rows are saved separately into a CSV file called: `wp_wpds_countries-no_match.csv`. The remaining data is consolidated into a single CSV file called: `wp_wpds_politicians_by_country.csv`. The schema for that file is below: | Column ||--------|| country || article_name || revision_id || article_quality_est. || population |Note: `revision_id` here is the same thing as `rev_id`.
###Code
# Join all data together excluding data with missing predictions
full_results = df.merge(population, how = 'outer', left_on = 'country', right_on = 'Name')
# Find data that is missing in either table and save assuming we are running from src folder
missing_results = full_results.loc[(full_results['country'].isnull() | full_results['Name'].isnull())]
missing_results.to_csv('../data_clean/wp_wpds_countries-no_match.csv', index = False)
print(missing_results['country'].unique())
print(missing_results['Name'].unique())
# Join data together, but only non-missing data and only data that have a prediction
# Rename the column names for clarity
results = df.loc[~df['prediction'].isna()].merge(population, how = 'inner', left_on = 'country', right_on = 'Name')\
[['country', 'page', 'rev_id', 'prediction', 'Population']]\
.rename(
{'page': 'article_name', 'rev_id': 'revision_id', 'prediction': 'article_quality_est.', 'Population': 'population'},
axis = 1)
# Write results to a table assuming we are in the src folder
results.to_csv('../data_clean/wp_wpds_politicians_by_country.csv', index = False)
###Output
_____no_output_____
###Markdown
Step 5: AnalysisWe calculate the proportion (as a percentage) of articles-per-population and high-quality articles for each country AND for each geographic region. "High quality" articles are ones that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) class. Examples: - if a country has a population of 10,000 people, and you found 10 FA or GA class articles about politicians from that country, then the percentage of articles-per-population would be .1%. - if a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.
###Code
# High quality articles are ones that are classified as FA or GA
results['high_quality'] = results['article_quality_est.'].isin(['FA', 'GA'])
# Create a new data frame that has the aggregated data
country_stats = results.groupby(['country', 'population', 'high_quality'], dropna = False, as_index = False)\
.agg({'revision_id': 'count'})
# Create the new columns required for the analysis
# The total number of articles as total_ids
country_stats['total_ids'] = country_stats.groupby(['country', 'population'])['revision_id'].transform('sum')
# The percentage of articles that are high quality as percentage_of_articles
country_stats['percentage_of_articles'] = country_stats['revision_id'] / country_stats['total_ids']
# The number of high quality articles per population as percentage_of_pop
country_stats['percentage_of_pop'] = country_stats['revision_id'] / country_stats['population']
# New data frame that contains the analysis and calculations. Rename the columns for clarity
articles_by_country = country_stats.loc[country_stats['high_quality'] == True].drop(['high_quality'], axis = 1)\
.rename({'revision_id': 'num_high_quality_articles', 'total_ids': 'total_articles'}, axis = 1)
articles_by_country
###Output
_____no_output_____
###Markdown
Get a mapping of each country to its sub-region and major region. For example, Northern Africa is a sub-region to which Algeria belongs, but Northern Africa is also in Africa which is its own major region/continent. We want to retain both of these regional classifications for the final output. This only works because the data in the `population` data set is stored such that each subregion is in ALL CAPS and the countries listed directly below it belong to that subregion. **IF THE FILE CHANGES AND THIS NO LONGER HOLDS THEN THE BELOW CODE WILL BE USELESS**.
###Code
# Find the sub-regions indices which are indicated by being in ALL CAPS
original_population['is_sub'] = original_population['Name'].str.isupper() * original_population.index
sub_i = original_population['is_sub'].expanding(1).apply(lambda x: np.max(x))
# Create a new column that retains this lowest-level subregional category.
original_population['Sub-Region_0'] = original_population.loc[sub_i, 'Name'].values
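# (Alternative sketch, an assumption rather than the original approach.) The same
# forward-carry of the most recent ALL-CAPS name can be done with a forward fill:
# original_population['Sub-Region_0'] = (
#     original_population['Name']
#     .where(original_population['Name'].str.isupper())
#     .ffill()
# )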
original_population['Sub-Region_0'].unique()
# I have manually listed what the highest-level regions are below. This may need to be changed if there are geopolitical
# changes to the way that countries are categorized by these major regions and the population data set changes.
regions_1 = ['AFRICA', 'NORTHERN AMERICA', 'LATIN AMERICA AND THE CARIBBEAN', 'ASIA', 'EUROPE', 'OCEANIA']
original_population['is_region'] = original_population['Name'].isin(regions_1) * original_population.index
region_i = original_population['is_region'].expanding(1).apply(lambda x: np.max(x))
# Create a new column that retains this highest-level subregional category.
original_population['Sub-Region_1'] = original_population.loc[region_i, 'Name'].values
region_map = original_population.loc[(original_population['is_sub'] == 0) & (original_population['Name'] != 'WORLD'), ['Name', 'Sub-Region_0', 'Sub-Region_1']]
region_map
###Output
_____no_output_____
###Markdown
The regional map is joined to the previous per-country aggregation of articles and population so the analysis can be rolled up two levels, into the sub-regions.
###Code
# Join the region map to the articles_by_country data frame to get the 2 levels of subregions each country belongs to.
# Rename columns for clarity.
articles_by_country = articles_by_country.merge(region_map, how = 'inner', left_on = 'country', right_on = 'Name').drop(['Name'], axis = 1)\
.merge(original_population[['Name', 'Population']], how = 'inner', left_on = 'Sub-Region_0', right_on = 'Name')\
.merge(original_population[['Name', 'Population']], how = 'inner', left_on = 'Sub-Region_1', right_on = 'Name')\
.drop(['Name_x', 'Name_y'], axis = 1)\
.rename({'Population_x': 'pop_Sub-Region_0', 'Population_y': 'pop_Sub-Region_1'}, axis = 1)
articles_by_country
# Uplevel analysis once into the lowest-level subregion
articles_by_subregion = articles_by_country.groupby(['Sub-Region_0', 'pop_Sub-Region_0'], as_index = False)\
.sum()[['Sub-Region_0', 'pop_Sub-Region_0', 'num_high_quality_articles', 'total_articles']]
# Recalculate the percentage of articles that are high quality
articles_by_subregion['percentage_of_articles'] = articles_by_subregion['num_high_quality_articles'] / articles_by_subregion['total_articles']
# Recalculate the number of high quality articles as percentage of the population
articles_by_subregion['percentage_of_pop'] = articles_by_subregion['num_high_quality_articles'] / articles_by_subregion['pop_Sub-Region_0']
articles_by_subregion
# Uplevel analysis in to the highest-level subregion and repeat analysis from above code block.
articles_by_region = articles_by_country.groupby(['Sub-Region_1', 'pop_Sub-Region_1'], as_index = False)\
.sum()[['Sub-Region_1', 'pop_Sub-Region_1', 'num_high_quality_articles', 'total_articles']]
articles_by_region['percentage_of_articles'] = articles_by_region['num_high_quality_articles'] / articles_by_region['total_articles']
articles_by_region['percentage_of_pop'] = articles_by_region['num_high_quality_articles'] / articles_by_region['pop_Sub-Region_1']
articles_by_region
###Output
_____no_output_____
###Markdown
Northern America and Oceania are currently the only sub-regions that do not have a higher level region, so they are duplicated in both data frames. The two data frames are unioned below and the duplicate rows removed.
###Code
# Union the datasets together and remove duplicates
articles_by_all_regions = pd.concat(
[articles_by_region.rename({'Sub-Region_1': 'Region', 'pop_Sub-Region_1': 'population'}, axis = 1),
articles_by_subregion.rename({'Sub-Region_0': 'Region', 'pop_Sub-Region_0': 'population'}, axis = 1)])\
.drop_duplicates()
articles_by_all_regions
###Output
_____no_output_____
###Markdown
Step 6: ResultsThe below tables are shown as a part of the output of the analysis and saved into the `results` folder, and displayed in this notebook.- Top 10 countries by coverage: 10 highest-ranked countries in terms of number of high-quality politician articles as a proportion of country population `top_10_countries_by_coverage.csv` - Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of high-quality politician articles as a proportion of country population `bottom_10_countries_by_coverage.csv` - Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality `top_10_countries_by_quality.csv` - Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality `bottom_10_countries_by_quality.csv` The top/bottom 10 results are ranked in order of either decreasing (for the top 10) or increasing (for the bottom 10) metric. For example, the country with the highest-quality articles as a percentage of total articles will come first in the top 10 results table. The country with the lowest-quality articles as a percentage of total articles will come first in the bottom 10 results table. The schema for the tables output is below: | Column ||--------|| country || population || percentage_of_articles || percentage_of_population |- Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population `regions_by_coverage.csv` - Geographic regions by quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality `regions_by_quality.csv` The results for each region are ranked in decreasing order of the metric in question. For example, the region with the overall highest proportion of high-quality articles will be listed first in the `regions_by_quality` table and corresponding .csv. The schema for these tables is below: | Column ||--------|| Region || population || percentage_of_articles || percentage_of_population |
###Code
# Top 10 countries by coverage
top10_country_coverage = articles_by_country[['country', 'population', 'percentage_of_articles', 'percentage_of_pop']]\
.sort_values(by = ['percentage_of_pop'], ascending = False)\
.head(10)
# Bottom 10 countries by coverage
bottom10_country_coverage = articles_by_country[['country', 'population', 'percentage_of_articles', 'percentage_of_pop']]\
.sort_values(by = ['percentage_of_pop'], ascending = True)\
.head(10)
# Top 10 countries by relative quality
top10_country_quality = articles_by_country[['country', 'population', 'percentage_of_articles', 'percentage_of_pop']]\
.sort_values(by = ['percentage_of_articles'], ascending = False)\
.head(10)
# Bottom 10 countries by relative quality
bottom10_country_quality = articles_by_country[['country', 'population', 'percentage_of_articles', 'percentage_of_pop']]\
.sort_values(by = ['percentage_of_articles'], ascending = True)\
.head(10)
# Save csvs assuming we are in the src folder
top10_country_coverage.to_csv('../results/top_10_countries_by_coverage.csv', index = False)
bottom10_country_coverage.to_csv('../results/bottom_10_countries_by_coverage.csv', index = False)
top10_country_quality.to_csv('../results/top_10_countries_by_quality.csv', index = False)
bottom10_country_quality.to_csv('../results/bottom_10_countries_by_quality.csv', index = False)
display(top10_country_coverage)
display(bottom10_country_coverage)
display(top10_country_quality)
display(bottom10_country_quality)
# Geographic regions ranked by coverage by pop
regions_by_coverage = articles_by_all_regions[['Region', 'population', 'percentage_of_articles', 'percentage_of_pop']]\
.sort_values(by = ['percentage_of_pop'], ascending = False)
# Geographic regions ranked by coverage by article quality
regions_by_quality = articles_by_all_regions[['Region', 'population', 'percentage_of_articles', 'percentage_of_pop']]\
.sort_values(by = ['percentage_of_articles'], ascending = False)
# Save csvs assuming we are in the src folder
regions_by_coverage.to_csv('../results/regions_by_coverage.csv', index = False)
regions_by_quality.to_csv('../results/regions_by_quality.csv', index = False)
display(regions_by_coverage)
display(regions_by_quality)
###Output
_____no_output_____
###Markdown
A2: Bias in DataThe goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. We combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article.We then perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians varies between countries. The analysis will consist of a series of tables that show:1. the countries with the greatest and least coverage of politicians on Wikipedia compared to their population.2. the countries with the highest and lowest proportion of high quality articles about politicians.3. a ranking of geographic regions by articles-per-person and proportion of high quality articles. Import Libraries
###Code
import os
import requests
from urllib.parse import urlencode
import pandas as pd
import numpy as np
from pprint import pprint as pp
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Define Constants
###Code
RAW_DATA_PATH = '../data/raw'
PROCESSED_DATA_PATH = '../data/processed'
ERROR_DATA_PATH = '../data/errors'
for path in [RAW_DATA_PATH, PROCESSED_DATA_PATH, ERROR_DATA_PATH]:
if not os.path.exists(path):
os.makedirs(path)
# Raw Data
RAW_COUNTRY_DATASET_FPATH = os.path.join(RAW_DATA_PATH, 'page_data.csv')
RAW_WORLD_POPULATION_DATASET_FPATH = os.path.join(RAW_DATA_PATH, 'WPDS_2020_data.csv')
#Processed prediction data
PROCESSED_POLITICIANS_DATASET_FPATH = os.path.join(PROCESSED_DATA_PATH, 'politicians_country.csv')
PROCESSED_WORLD_POPULATION_COUNTRY_LEVEL_DATASET_FPATH = os.path.join(PROCESSED_DATA_PATH, 'world_population_country_level.csv')
PROCESSED_WORLD_POPULATION_REGION_LEVEL_DATASET_FPATH = os.path.join(PROCESSED_DATA_PATH, 'world_population_region_level.csv')
PROCESSED_MISSING_PREDICTION_DATA_FPATH = os.path.join(ERROR_DATA_PATH, 'missing_prediction_revids.csv')
# Processed merged data
PROCESSED_POLITICIANS_WORLD_POPULATION_MERGED_FPATH = os.path.join(PROCESSED_DATA_PATH,'wp_wpds_politicians_by_country.csv')
PROCESSED_POLITICIANS_WORLD_POPULATION_NO_MATCH_FPATH = os.path.join(ERROR_DATA_PATH,'wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
1. Data AcquisitionWe obtain the data from several different places:1. The Wikipedia politicians by country dataset can be found on [Figshare](https://figshare.com/articles/Untitled_Item/5513449) * We first download the zipped folder manually * We then extracted the zipped folder * Inside the folder we go to: `country/country/data` * Here, we copy `page_data.csv` and place this inside the raw data path2. The population data is available in CSV format as [WPDS_2020_data.csv](https://docs.google.com/spreadsheets/d/1CFJO2zna2No5KqNm9rPK5PCACoXKzb-nycJFhV689Iw/edit?usp=sharing) * This dataset is drawn from the world population data sheet published by the [Population Reference Bureau](https://www.prb.org/international/indicator/population/table/).
###Code
df_pcd = pd.read_csv(RAW_COUNTRY_DATASET_FPATH)
df_wpd = pd.read_csv(RAW_WORLD_POPULATION_DATASET_FPATH)
df_pcd.head(5)
df_wpd.head(5)
###Output
_____no_output_____
###Markdown
Data CleaningThere is some information that is not needed for analysis in each of the files mentioned above. Thus, we perform the following cleaning steps:1. Country Dataset * The dataset contains some page names that start with the string "Template:". * These pages are not Wikipedia articles, and should not be included in the analysis.2. Population Dataset * This dataset contains some rows that provide cumulative regional population counts, rather than country-level counts. * These rows are distinguished by having ALL CAPS values in the 'geography' field (e.g. AFRICA, OCEANIA). * We remove these from the dataset, but retain a copy of these in a separate file
###Code
df_pcd = df_pcd[df_pcd["page"].str.contains("Template:")==False]
# According to assignment requirements
df_wpd_country = df_wpd[df_wpd['Name'].str.isupper() == False] # Country-level counts
df_wpd_region = df_wpd[df_wpd['Name'].str.isupper()] # Cumulative region level counts
# Better way of doing it ... but assignment requirements!
# df_wpd_country = df_wpd[df_wpd['Type'].str.contains("Country") == True] # Country-level counts
# df_wpd_region = df_wpd[df_wpd['Type'].str.contains("Sub-Region") == True] # Cumulative region level counts
# Ensure that we do not have anything that is all-caps in the `Name` field
df_wpd_country['Name'].unique()
# Ensure that we only have strings that are all-caps in the `Name` field
df_wpd_region['Name'].unique()
###Output
_____no_output_____
###Markdown
We can now cache away the files that were created
###Code
df_pcd.to_csv(PROCESSED_POLITICIANS_DATASET_FPATH, index=False)
df_wpd_country.to_csv(PROCESSED_WORLD_POPULATION_COUNTRY_LEVEL_DATASET_FPATH, index=False)
df_wpd_region.to_csv(PROCESSED_WORLD_POPULATION_REGION_LEVEL_DATASET_FPATH, index=False)
###Output
_____no_output_____
###Markdown
2. Obtain Article Quality PredictionsNow we need to get the predicted quality scores for each article in the Wikipedia dataset. We're using a machine learning system called ORES. This was originally an acronym for "Objective Revision Evaluation Service" but was simply renamed “ORES”. ORES is a machine learning tool that can provide estimates of Wikipedia article quality. The article quality estimates are, from best to worst:* FA - Featured article* GA - Good article* B - B-class article* C - C-class article* Start - Start-class article* Stub - Stub-class articleThese were learned based on articles in Wikipedia that were peer-reviewed using the Wikipedia content assessment procedures.These quality classes are a sub-set of quality assessment categories developed by Wikipedia editors. We use a [REST API](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context) to obtain the information for each article.
###Code
ORES_ENDPOINT = 'https://ores.wikimedia.org/v3/scores/{context}?'
CONTEXT = 'enwiki'
MODEL = 'articlequality'
NUM_REVIDS_PER_BATCH = 50 # We will obtain predictions for these many articles at a time
df_pcd[MODEL] = np.NaN
df_pcd.set_index('rev_id', inplace=True)
def api_call(endpoint, parameters):
call = requests.get(endpoint.format(**parameters))
response = call.json()
return response
###Output
_____no_output_____
###Markdown
We want to call the API for multiple `rev_id`'s at a time. To achieve this, we create a list of lists, where each small list will contain a batch of rev_ids to be called at a time.
###Code
revids = df_pcd.index.to_list()
num_lists = int(np.ceil(len(revids) / NUM_REVIDS_PER_BATCH))  # ceil ensures no batch exceeds NUM_REVIDS_PER_BATCH rev_ids
revids = list(map(list, np.array_split(revids, num_lists)))
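# For illustration: np.array_split(list(range(5)), 2) gives [array([0, 1, 2]), array([3, 4])],
# i.e. the rev_ids are divided into num_lists roughly equal batches.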
df_pcd.head()
###Output
_____no_output_____
###Markdown
This section is responsible for calling the API as well as error handling. The following is the procedure for each batch of `rev_ids`:1. Populate the parameter and query part in the API endpoint2. Call the API and obtain a JSON response3. Check to see if the API call was successful 4. Check to see if each `rev_id` returned a valid response * If the response is valid, we go ahead and save this prediction information into a dataframe * If not, we add this `rev_id` to a list to show the errored out `rev_ids` later
###Code
error_batch_list = []
missing_revids = []
for revid_batch in tqdm(revids):
# Define the parameters that we ill be sending to the API
params = {
'context': CONTEXT,
}
query_parms = {
'revids': '|'.join(str(x) for x in revid_batch),
'models': MODEL
}
# Call the API and get the prediction from API
response = api_call(ORES_ENDPOINT + urlencode(query_parms), params)
# For each rev id, we populate it with the correct prediction
try:
scores = response[CONTEXT]['scores']
except:
error_batch_list.append(revid_batch)
continue
for rev_id in scores.keys():
if 'error' in rev_id: continue
try:
prediction = scores[rev_id][MODEL]['score']['prediction']
except:
missing_revids.append(rev_id)
# print(scores[rev_id][MODEL])
continue
rev_id = int(rev_id)
df_pcd.loc[rev_id, MODEL] = prediction
###Output
100%|██████████| 934/934 [06:42<00:00, 2.32it/s]
###Markdown
If a batch errored out, we iterate through each `rev_id` in that batch and call the API individually to see whether it returns a valid response. Batch requests can occasionally fail because of constraints of the API.
###Code
for revid_batch in error_batch_list:
for revid in revid_batch:
# This is the unique id for the article
revid_str = str(revid)
# Define the parameters that we ill be sending to the API
params = {
'context': CONTEXT,
'revid': revid_str,
'model': MODEL
}
response = api_call(ORES_ENDPOINT, params)
if 'scores' not in response[CONTEXT]: continue
scores = response[CONTEXT]['scores'][revid_str][MODEL]
if 'error' in scores.keys():
missing_revids.append(revid) # If we do not find the article, we skip it and move on
rev_id = int(revid)
df_pcd.loc[rev_id, MODEL] = scores['score']['prediction']
###Output
_____no_output_____
###Markdown
2.1 Prediction Errors
###Code
missing_revids = set(missing_revids)
print(f'There are {len(missing_revids)} rev_ids for which the API did not return a prediction. A list of rev ids can be found in the following file: \n{PROCESSED_MISSING_PREDICTION_DATA_FPATH}')
missing_revids_df = pd.DataFrame(list(missing_revids))
missing_revids_df.columns = ['rev_id']
missing_revids_df.to_csv(PROCESSED_MISSING_PREDICTION_DATA_FPATH, index=False)
###Output
There are 274 rev_ids for which the API did not return a prediction. A list of rev ids can be found in the following file:
../data/errors\missing_prediction_revids.csv
###Markdown
3. Combining DatasetsSome processing of the data will be necessary! In particular, you'll need to - after retrieving and including the ORES data for each article - merge the Wikipedia data and population data together. Both have fields containing country names for just that purpose. After merging the data, you'll invariably run into entries which cannot be merged. Either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.
###Code
PROCESSED_POLITICIANS_DATASET_PREDICTIONS_FPATH = os.path.join(PROCESSED_DATA_PATH, 'politicians_country_predicted.csv')
df_pcd.to_csv(PROCESSED_POLITICIANS_DATASET_PREDICTIONS_FPATH)
# df_pcd = pd.read_csv(PROCESSED_POLITICIANS_DATASET_PREDICTIONS_FPATH).set_index('rev_id')
# Clean and merge our 2 dataframes
df_pcd.dropna(subset = [MODEL], inplace=True)
df_pcd.reset_index(inplace=True)
# Merge the 2 dataframes
wp_wpds_politicians_by_country_df = pd.merge(left=df_pcd, right=df_wpd_country, left_on='country', right_on='Name')
# Clean and rename the merged dataframe
wp_wpds_politicians_by_country_df.rename(columns={'page': 'article_name',
'rev_id': 'revision_id',
MODEL: 'article_quality_est.',
'Population': 'population'}, inplace=True)
wp_wpds_politicians_by_country_df = wp_wpds_politicians_by_country_df[['country', 'article_name', 'revision_id', 'article_quality_est.', 'population']]
###Output
_____no_output_____
###Markdown
Check to find out what countries were not merged because there was no match
###Code
merged_countries = set(wp_wpds_politicians_by_country_df['country'])
all_pcd = set(df_pcd['country'])
all_wpd_country = set(df_wpd_country['Name'])
un_merged_countries_set = all_pcd.difference(merged_countries).union(all_wpd_country.difference(merged_countries))
un_merged_countries_df = pd.DataFrame(list(un_merged_countries_set))
un_merged_countries_df.columns = ['country']
wp_wpds_politicians_by_country_df.to_csv(PROCESSED_POLITICIANS_WORLD_POPULATION_MERGED_FPATH, index=False)
un_merged_countries_df.to_csv(PROCESSED_POLITICIANS_WORLD_POPULATION_NO_MATCH_FPATH, index=False)
wp_wpds_politicians_by_country_df.head(5)
###Output
_____no_output_____
###Markdown
4. AnalysisHere we transform the data so that it can be easily consumed for the results section. We create a pivot table to show the number and types of articles for each country. Moreover, we also add relevant information for each country
###Code
df_analysis = pd.pivot_table(wp_wpds_politicians_by_country_df,
fill_value=0,
columns=['article_quality_est.'],
aggfunc={
'article_quality_est.': len, #count the number of articles
},
index=['country'] #per country
)
df_analysis.columns = df_analysis.columns.droplevel() #clean up multilevel index
df_analysis = df_analysis.reset_index()
df_analysis.columns.name = None
# Add population to the pivot table
df_analysis = pd.merge(left=df_analysis,
right=wp_wpds_politicians_by_country_df.groupby(['country'])['population'].mean(),
left_on='country',
right_index=True)
df_analysis['num_articles'] = df_analysis['FA'] + df_analysis['GA'] + df_analysis['B'] + df_analysis['C'] + df_analysis['Stub'] + df_analysis['Start']
df_analysis['num_high_quality_articles'] = df_analysis['FA'] + df_analysis['GA']
#df_analysis['articles_per_population_percent'] = (df_analysis['num_high_quality_articles'] / df_analysis['population']) * 100
df_analysis['articles_per_population_percent'] = (df_analysis['num_articles'] / df_analysis['population']) * 100
df_analysis['high_quality_articles_percent'] = (df_analysis['num_high_quality_articles'] / df_analysis['num_articles']) * 100
###Output
_____no_output_____
###Markdown
After the transformation, our table now looks as follows:
###Code
df_analysis.head(5)
###Output
_____no_output_____
###Markdown
We now create a similar pivot table on a per region level. The process is as follows:1. Add a new column to determine which region each country belongs to2. Merge the data for world population (per region) and politician articles3. Rename and clean up the new dataframe
###Code
region = "NORTHERN AFRICA"
regions = ['WORLD', 'AFRICA', 'NORTHERN AFRICA']
for i in range(3, len(df_wpd)):
if df_wpd.iloc[i]['Type'] == 'Sub-Region':
region = df_wpd.iloc[i]['Name']
regions.append(region)
df_wpd['Region'] = regions
# Merge the per country population and articles by country datatset
wp_wpds_politicians_by_region_df = pd.merge(left=wp_wpds_politicians_by_country_df,
right=df_wpd,
left_on='country',
right_on='Name',
how='left')
# Add per region population information to the above dataframe
wp_wpds_politicians_by_region_df = pd.merge(left=wp_wpds_politicians_by_region_df,
right=df_wpd_region,
left_on='Region',
right_on='Name',
how='left')
wp_wpds_politicians_by_region_df = wp_wpds_politicians_by_region_df[['Region', 'country', 'article_name', 'revision_id', 'article_quality_est.', 'Population_y', 'population']]
wp_wpds_politicians_by_region_df.rename(columns={'Region': 'region',
'Population_y': 'region_population',
'population': 'country_population'}, inplace=True)
wp_wpds_politicians_by_region_df.dropna(subset=['region_population'], inplace=True)
df_analysis_region = pd.pivot_table(wp_wpds_politicians_by_region_df,
fill_value=0,
columns=['article_quality_est.'],
aggfunc={
'article_quality_est.': len, #count the number of articles
},
index=['region'] #per region
)
df_analysis_region.columns = df_analysis_region.columns.droplevel() #clean up multilevel index
df_analysis_region = df_analysis_region.reset_index()
df_analysis_region.columns.name = None
# Add population to the pivot table
df_analysis_region = pd.merge(left=df_analysis_region,
right=wp_wpds_politicians_by_region_df.groupby(['region'])['region_population'].mean(),
left_on='region',
right_index=True)
df_analysis_region['num_articles'] = df_analysis_region['FA'] + df_analysis_region['GA'] + df_analysis_region['B'] + df_analysis_region['C'] + df_analysis_region['Stub'] + df_analysis_region['Start']
df_analysis_region['num_high_quality_articles'] = df_analysis_region['FA'] + df_analysis_region['GA']
# df_analysis_region['articles_per_population_percent'] = (df_analysis_region['num_high_quality_articles'] / df_analysis_region['region_population']) * 100
df_analysis_region['articles_per_population_percent'] = (df_analysis_region['num_articles'] / df_analysis_region['region_population']) * 100
df_analysis_region['high_quality_articles_percent'] = (df_analysis_region['num_high_quality_articles'] / df_analysis_region['num_articles']) * 100
###Output
_____no_output_____
###Markdown
After the transformation, our table now looks as follows:
###Code
df_analysis_region.head(5)
###Output
_____no_output_____
###Markdown
5. Results
###Code
def show_results(df, _by, _ascending=True, n=10, pre_cols=['country']):
cols = pre_cols + _by
return df.sort_values(by=_by, ascending=_ascending).head(n)[cols]
###Output
_____no_output_____
###Markdown
5.1Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
show_results(df_analysis, _by=['articles_per_population_percent'], _ascending=False)
###Output
_____no_output_____
###Markdown
5.2 Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
show_results(df_analysis, _by=['articles_per_population_percent'], _ascending=True)
###Output
_____no_output_____
###Markdown
5.3 Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
show_results(df_analysis, _by=['high_quality_articles_percent'], _ascending=False)
###Output
_____no_output_____
###Markdown
5.4 Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
show_results(df_analysis, _by=['high_quality_articles_percent'], _ascending=True)
###Output
_____no_output_____
###Markdown
5.5 Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
show_results(df_analysis_region, _by=['articles_per_population_percent'], _ascending=False, pre_cols=['region'], n=len(df_analysis_region))
###Output
_____no_output_____
###Markdown
5.6 Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
show_results(df_analysis_region, _by=['high_quality_articles_percent'], _ascending=False, pre_cols=['region'], n=len(df_analysis_region))
###Output
_____no_output_____
###Markdown
Wikipedia politician pages, and bias in data University of Washington, DATA 512 Autumn 2019, Assignment 2 Bianca ZlavogIn this assignment, we analyze the number and quality of English Wikipedia politician pages relative to country and region population. The two main data sources we use are the Politicians by Country from the English-language Wikipedia dataset and the 2018 World Population Data from the Population Reference Bureau. We process this data, then use the Wikipedia ORES API to estimate the quality of each Wikipedia politician article. Finally, we merge all the data and analyze the trends in relative coverage and quality of articles by country and region, then discuss our findings along with potential sources of bias in the data.First, we will import all the needed packages.
###Code
import csv
import urllib.request
from urllib.request import urlopen
import codecs
import pandas as pd
from io import BytesIO
from zipfile import ZipFile
import oresapi
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Data AcquisitionIn this first step, we download and clean all the necessary input data.The first dataset we read in is the [Politicians by Country from the English-language Wikipedia](https://figshare.com/articles/Untitled_Item/5513449) dataset. This file contains over 400,000 observations, and the variables: `page`: Wikipedia page title, containing the name of a politician `country`: Name of the country that the politician worked in `rev_id`: edit ID of the last edit to the page We save out a copy of the raw data, then clean it by removing any entries starting with the string "Template:", since these entries are not Wikipedia articles.
###Code
# Read in Politicians by Country dataset from website, and parse the resulting csv file
resp = urlopen("https://ndownloader.figshare.com/files/9614893")
zipfile = ZipFile(BytesIO(resp.read()))
file = zipfile.open("country/data/page_data.csv")
csvfile = csv.reader(codecs.iterdecode(file, 'utf-8'))
header = next(csvfile)
page_data = pd.DataFrame(csvfile)
page_data.columns = header
# Save out raw data
page_data.to_csv('../data_raw/page_data.csv', index = False)
# Drop pages starting with "Template:", which are not Wikipedia articles
page_data = page_data[page_data['page'].str.startswith('Template:') != True]
page_data = page_data.reset_index(drop = True)
page_data.head()
###Output
_____no_output_____
###Markdown
Next, we read in the Population Reference Bureau's 2018 World Population Data Sheet. This dataset contains two variables: `Geography`: Country or region measured `Population mid-2018 (millions)`: Population of the respective location in mid-2018, in millions of people We downloaded this dataset manually from the [Canvas course site](https://canvas.uw.edu/files/58607571/download?download_frd=1) to the `data_raw` directory. We then remove any entries of locations given in all capital letters, because these are regions rather than countries. Note that an updated set of population estimates is available from the [Population Reference Bureau's 2019 World Population Data Sheet](https://www.prb.org/international/indicator/population/table/), which was not used in this assignment. I have included some unused code in the commented-out section below that will instead read this dataset from the website, keep only the geographical information and population variables, then save out the formatted raw data.
###Code
# Read in 2018 World Population data
populations = pd.read_csv('../data_raw/WPDS_2018_data.csv')
populations.columns = ['country', 'Population mid-2018 (millions)']
## Alternately, read in 2019 World Population data
# ftpstream = urllib.request.urlopen("https://datacenter.prb.org/download/international/indicator/population/csv")
# csvfile = csv.reader(codecs.iterdecode(ftpstream, 'utf-8'))
# populations = pd.DataFrame(csvfile)
#
## Keep only needed rows and columns
# populations = populations.drop([0, 1, 2, 3, 4])
# populations = populations.drop(populations.columns[[0, 2, 3]], axis = 1)
#
## Rename columns
# populations.columns = ['Geography', 'Population mid-2019 (millions)']
#
# populations.to_csv('../data_raw/WPDS_2019_data.csv', index = False)
# Remove uppercased rows that contain region-level data
populations_country = populations[populations.country != populations.country.str.upper()]
populations_country = populations_country.reset_index(drop = True)
populations_country.head()
###Output
_____no_output_____
###Markdown
Part 2: Data ProcessingIn this section, we process the population dataset to map each country to its respective region. Then, we use the ORES API to obtain article quality data, and finally merge all our datasets together in preparation for analysis.First, let's create a dataset of countries together with their corresponding region.
###Code
# First, extract just the regions
populations_region = populations[populations.country == populations.country.str.upper()]
regions_list = []
regs = populations_region['country'].tolist()
# Extract the indices of the regions dataset, and subtract them to get the number of times each region should be repeated
indices = populations_region.index
last_element = indices[-1]
indices = [(indices[i - 1] - indices[i]) * -1 for i in range(1, len(indices))]
indices.append(len(populations.index) + 1 - last_element)
# Then loop over the regions, and append each to a list a number of times equal to the number of rows belonging to that region in the original populations dataset
for i in range(len(regs)):
j = 1
while j <= indices[i]:
regions_list.append(regs[i])
j += 1
# Finally, convert the regions list to a DataFrame, and merge the region data on to the country populations data
regions_df = pd.DataFrame.from_dict(regions_list)
regions_df.columns = ["region"]
populations = populations.merge(regions_df, left_index = True, right_index = True)
###Output
_____no_output_____
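###Markdown
As a design note, here is a minimal sketch (toy data, not the WPDS file) of an alternative to the index arithmetic above: since the region rows are the uppercased entries, forward-filling the region names maps each country row to the region that precedes it.
###Code
import pandas as pd
# Toy frame mixing region rows (uppercase) and country rows.
toy = pd.DataFrame({'country': ['AFRICA', 'Algeria', 'Egypt', 'ASIA', 'Japan']})
is_region = toy['country'].str.isupper()
# Keep the name only on region rows, then carry it forward onto the countries below.
toy['region'] = toy['country'].where(is_region).ffill()
print(toy[~is_region])
###Output
_____no_output_____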
###Markdown
Next, we query the [ORES client](https://github.com/wikimedia/ores), from Wikimedia Foundation and authors Aaron Halfaker, Yuvi Panda, Amir Sarabadani, Justin Du, Adam Wight, available under an MIT License. Documentation pages for the API are available [here](https://www.mediawiki.org/wiki/ORES). We obtain predictions of article quality for each Wikipedia politician page. Note that there are six possible predicted values of article quality: FA (Featured article), GA (Good article), B (B-class article), C (C-class article), Start (Start-class article), Stub (Stub-class article). For the purposes of this assignment, we only consider FA and GA as corresponding to high-quality articles.Note that I was not able to install the `ores` package due to package compatibility errors, but was able to instead use the [`oresapi` package](https://github.com/halfak/oresapi), available from Aaron Halfaker, 2019, under an MIT License.
###Code
# Start an ORES API session
# Provide this useragent argument for the class to help the ORES team track requests
ores_session = oresapi.Session("https://ores.wikimedia.org", "Class project <[email protected]>")
# Obtain predictions of article quality for each revision ID in the politicians dataset
results = ores_session.score("enwiki", ["articlequality"], page_data['rev_id'])
gen_lst = list(results)
# Convert outputs to a dataframe
results2 = pd.DataFrame(columns = ['articlequality'])
for i in gen_lst:
results2 = results2.append(i, ignore_index = True)
# Extract just the predicted article quality
results2['articlequality'] = results2['articlequality'].astype(str).str.slice(start = 26).str.rsplit("'", expand = True)
results2.head()
###Output
_____no_output_____
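###Markdown
A small sketch of a dictionary-based way to pull out the predicted class, as an alternative to the string slicing above. The `sample_score` value is illustrative only, following the articlequality response shape shown elsewhere in this document; it is not a real API response.
###Code
# Illustrative score dictionary (hypothetical values, not an actual oresapi response).
sample_score = {'articlequality': {'score': {'prediction': 'Stub',
                                             'probability': {'Stub': 0.97, 'Start': 0.01}}}}
# Indexing into the nested dictionary retrieves the predicted class directly.
prediction = sample_score['articlequality']['score']['prediction']
print(prediction)
###Output
_____no_output_____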
###Markdown
Finally, merge all the datasets together, and save out a cleaned dataset for analysis.
###Code
# First merge article quality predictions onto politicians pages dataset
all_data = page_data.merge(results2, left_index = True, right_index = True)
# Remove and save out entries for which the ORES API query did not return article quality scores
all_data_noscores = all_data[~all_data['articlequality'].isin(['B', 'C', 'FA', 'GA', 'Stub', 'Start'])]
all_data_noscores.to_csv('../data_clean/wp_wpds_politicians_noscores.csv', index = False)
all_data = all_data[all_data['articlequality'].isin(['B', 'C', 'FA', 'GA', 'Stub', 'Start'])]
# Now merge on country population data
all_data = all_data.merge(populations_country, how = 'outer', indicator = True)
# Output a dataset containing the data that failed to merge - either no country population data, or no politician data
all_data_nomerge = all_data[all_data['_merge'].isin(['left_only', 'right_only'])]
all_data_nomerge.to_csv('../data_clean/wp_wpds_countries-no_match.csv', index = False)
# Save out the final cleaned dataset with all the entries that merged
all_data_fin = all_data[all_data['_merge'] == "both"]
all_data_fin = all_data_fin.rename(columns = {"page": "article_name", "rev_id": "revision_id", "articlequality":
"article_quality", "Population mid-2018 (millions)": "population"})
all_data_fin = all_data_fin[["country", "article_name", "revision_id", "article_quality", "population"]]
all_data_fin.to_csv('../data_clean/wp_wpds_politicians_by_country.csv', index = False)
# Merge on region data
all_data_fin = all_data_fin.merge(populations)
# Merge on region populations
populations_region.columns = ['region', 'region_population']
all_data_fin = all_data_fin.merge(populations_region)
# Create indicator for whether the article quality is considered high-quality (GA or FA)
all_data_fin['high_quality'] = np.where((all_data_fin['article_quality'] == "GA") |
(all_data_fin['article_quality'] == "FA"), 1, 0)
# Convert population data type from string to integer
all_data_fin['population'] = pd.to_numeric(all_data_fin['population'].str.replace(',', ''))
all_data_fin['region_population'] = all_data_fin['region_population'].str.replace(',', '').astype(int)
all_data_fin.head()
###Output
_____no_output_____
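###Markdown
For reference, a minimal sketch (hypothetical toy data) of how `indicator=True` flags the rows that fail to match in an outer merge, which is what feeds the no-match file above.
###Code
import pandas as pd
# Two toy tables that only partially overlap on 'country'.
left = pd.DataFrame({'country': ['Alpha', 'Beta'], 'article_name': ['a1', 'b1']})
right = pd.DataFrame({'country': ['Beta', 'Gamma'], 'population': [10, 20]})
merged = left.merge(right, how='outer', indicator=True)
# Rows tagged left_only/right_only have no counterpart in the other table.
print(merged[merged['_merge'].isin(['left_only', 'right_only'])])
###Output
_____no_output_____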
###Markdown
Step 3: AnalysisIn this section, we create six tables comparing the quantity and quality of Wikipedia political pages relative to population across countries and regions. Finally, we conclude with a writeup of the findings and potential sources of bias in the data.
###Code
# TABLE 1: Top 10 countries by coverage
# "10 highest-ranked countries in terms of number of politician articles as a proportion of country population"
articles_country = all_data_fin.groupby(['country', 'population'], as_index = False).count()
articles_country['proportion'] = articles_country['article_name'] / articles_country['population']
articles_country = articles_country.sort_values('proportion', ascending = False)
articles_country2 = articles_country[['country', 'proportion']]
articles_country2.head(10)
# TABLE 2: Bottom 10 countries by coverage
# "10 lowest-ranked countries in terms of number of politician articles as a proportion of country population"
articles_country2 = articles_country2.sort_values('proportion')
articles_country2.head(10)
# TABLE 3: Top 10 countries by relative quality
# "10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality"
articles_qual = all_data_fin.groupby(['country'], as_index = False).sum()
articles_qual = articles_qual.merge(articles_country[['country', 'article_name']])
articles_qual['proportion'] = articles_qual['high_quality'] / articles_qual['article_name']
articles_qual = articles_qual.sort_values('proportion', ascending = False)
articles_qual = articles_qual[['country', 'proportion']]
articles_qual.head(10)
# TABLE 4: Bottom 10 countries by relative quality
# "10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality"
articles_qual = articles_qual.sort_values('proportion')
articles_qual.head(10)
# TABLE 5: Geographic regions by coverage
# "Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population"
articles_region = all_data_fin.groupby(['region', 'region_population'], as_index = False).count()
articles_region['proportion'] = articles_region['article_name'] / articles_region['region_population']
articles_region = articles_region.sort_values('proportion', ascending = False)
articles_region2 = articles_region[['region', 'proportion']]
articles_region2
# TABLE 6: Geographic regions by relative quality
# "Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality"
region_qual = all_data_fin.groupby(['region'], as_index = False).sum()
region_qual = region_qual.merge(articles_region[['region', 'article_name']])
region_qual['proportion'] = region_qual['high_quality'] / region_qual['article_name']
region_qual = region_qual.sort_values('proportion', ascending = False)
region_qual = region_qual[['region', 'proportion']]
region_qual
###Output
_____no_output_____
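###Markdown
A quick sketch (hypothetical toy data) of the indicator-sum trick used in Tables 3, 4, and 6 above: because `high_quality` is a 0/1 column, summing it per group counts the FA/GA articles.
###Code
import pandas as pd
# Toy frame with a 0/1 high-quality indicator per article.
toy = pd.DataFrame({'country': ['A', 'A', 'B'], 'high_quality': [1, 0, 1]})
# Summing the indicator per country yields the count of high-quality articles.
print(toy.groupby('country', as_index=False)['high_quality'].sum())
###Output
_____no_output_____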
###Markdown
A2: Bias in data Dane Jordan Import necessary libraries that will be used
###Code
import json
import matplotlib.pyplot as plt
import pandas as pd
import requests
%matplotlib inline
###Output
_____no_output_____
###Markdown
Getting the article and population dataThe wikipedia article dataset, "Politicians by Country from the English-language Wikipedia," was obtained from Figshare on 10/29/2017. It was downloaded as a zipped folder and the `page_data.csv` was extracted from /country/data.https://figshare.com/articles/Untitled_Item/5513449- CC-BY 4.0The population dataset, "Population Mid-2015," was obtained from the Population Reference Bureau on 10/27/2017. The link for this data is NOT provided and it is NOT included in the repository as it is copyrighted.- Copyright © 2016, Population Reference Bureau. All rights reserved.The two datasets are read into pandas DataFrames below. The revid for the wikipedia article dataset is set to be a string for merging purposes later on. The population dataset excludes the first line which is a title for the dataset and removes all commas in numbers (population counts).
###Code
# read in the data from the page_data.csv file
page_data = pd.read_csv('../data_raw/page_data.csv', dtype={'rev_id': str})
# read in the data from the population csv file
population_data = pd.read_csv('../data_raw/Population Mid-2015.csv', header=1, thousands=',')
###Output
_____no_output_____
###Markdown
Getting article quality predictions using ORESData was gathered from the ORES (Objective Revision Evaluation Service) API, 2017. It was obtained on 10/29/2017. While no license was found on ORES, it has been attributed to the same license as the Wikimedia Foundation.https://wikimediafoundation.org/wiki/Terms_of_Use/en- CC-BY-SA 3.0 Passing the rev_ids through the ORES API could be done individually or in batches. Because of the number of calls to the API, it is much quicker to batch the revids; however, the API only accepts a certain number of revids per call. After some trial and error, the acceptable number of revids seemed to be anything less than 140. For safety, the revids were batched in groups of 100. Below, a list of lists is created where each nested list contains 100 revids.
###Code
# create a list of lists where each nested list contains 100 revids delimited by '|'
revid_list = []
for i in range(len(page_data)):
if i % 100 == 0:
revid_list.insert(i // 100, [str(page_data['rev_id'][i])])
else:
revid_list[i // 100].append(str(page_data['rev_id'][i]))
###Output
_____no_output_____
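###Markdown
As a design note, the same batching can be expressed with list slicing, which avoids the manual index arithmetic; a minimal sketch on hypothetical IDs is shown below.
###Code
# Toy illustration: split a list of IDs into batches of 3 (100 in the real run).
toy_ids = [str(n) for n in range(7)]
batches = [toy_ids[k:k + 3] for k in range(0, len(toy_ids), 3)]
print(batches)
###Output
_____no_output_____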
###Markdown
Next, two lists are created: one to store the article quality predictions and another to account for any missing revids. This handling was added after the Wikipedia article dataset was updated and the ORES API call returned an error during debugging. We then loop over the revid list, sending the API a request for each batch of 100 revids. After each API call, the response is looped over and each revid is parsed to obtain the prediction. That prediction is appended, together with the revid, to the predictions list. If no prediction is found and a KeyError is raised, an 'NA' along with the revid is appended to the predictions list, and the revid alone is appended to the missing revids list.__NOTE: At the time the API was called and the cleaned dataset was created, only two revids were missing. The ORES API call has been rerun since then and now shows more missing revids. The cleaned dataset is imported in a later step, and was created using the ORES API response from 10/29/2017.__
###Code
# initialize a list to store the prediction values
predictions = []
missing_revid = []
# loop through the the revid lists (batches of 100) and call the api
for i in range(len(revid_list)):
endpoint = 'https://ores.wikimedia.org/v3/scores/{context}?models={models}&revids={revids}'
params = {'context' : 'enwiki',
'models' : 'wp10',
'revids' : '|'.join(revid_list[i])
}
api_call = requests.get(endpoint.format(**params))
response = api_call.json()
# loop through the response for the batch of 100 revids and append the prediction to the prediction list
for j in response['enwiki']['scores']:
# check for missing revids (potentially articles that have been deleted)
try:
predictions.append([j, response['enwiki']['scores'][j]['wp10']['score']['prediction']])
        # if the article is missing, attribute an 'NA' prediction and store the missing revid
except KeyError:
predictions.append([j, 'NA'])
missing_revid.append(j)
# print the count and list of missing revids
print('There were a total of ' + str(len(missing_revid)) + ' missing revids. They are as follows: ' + str(sorted(missing_revid)))
###Output
There were a total of 4 missing revids. They are as follows: ['806811023', '807367030', '807367166', '807484325']
###Markdown
Combining the datasetsThe predictions obtained from ORES are loaded into a pandas DataFrame with column names added. Duplicate revids were originally dropped from this DataFrame prior to the wikipedia article dataset being updated. The predictions obtained from ORES and the page_data obtained from Figshare are then merged on the revid, essentially appending the article quality prediction to the page_data. This new page_data is then merged with the population dataset on the country/Location ('country' is the attribute name in page_data and 'Location' is the attribute name in population_data) to obtain a cleaned dataset that is in the below format:

column name | value
--- | ---
country | str
article_name | str
revision_id | str
article_quality | str
population | str
###Code
# convert predictions to a dataframe and add column names
predictions_df = pd.DataFrame(predictions)
predictions_df.columns = ['revid', 'prediction']
# drop duplicates (this was included for the duplicate revids--no longer needed with the updated page_data.csv)
predictions_df = predictions_df.drop_duplicates()
# merge the predictions with the page_data
page_data_new = page_data.merge(predictions_df, how='left', left_on='rev_id', right_on='revid')
page_data_new = page_data_new.drop('revid', axis=1)
# merge the two data sets (page_data_new and population_data), add specified column names, and reorder columns
combined_data = page_data_new.merge(population_data[['Location', 'Data']], left_on='country', right_on='Location')
combined_data = combined_data.drop('Location', axis=1)
combined_data.columns = ['article_name', 'country', 'revision_id', 'article_quality', 'population']
combined_data = combined_data[['country', 'article_name', 'revision_id', 'article_quality', 'population']]
# output file to csv
# combined_data.to_csv('../data_clean/combined_data.csv', index=False)
###Output
_____no_output_____
###Markdown
AnalysisSo as to reproduce the analysis, the cleaned data is loaded from a saved static file. The analysis could potentially change depending on the earlier data sources, as noted regarding the ORES API response. First, all of the countries are identified and duplicates are removed. Then the number of articles for each country is counted using a `groupby` (__NOTE: This includes articles with missing revids from the API call__). Next, the high-quality articles are counted in a similar fashion, again listing the country and the count, with countries that have no high-quality articles receiving a count of '0'. The population for each country is obtained by using a `groupby` and retrieving the `max`, which is also the `min`; since there is only one possible population value assigned to each country, this does not matter. Now that the data is in a per-country format, the simple calculations for articles per population and percentage of high-quality articles are performed for each country and loaded into an analysis DataFrame with the following structure:

column name | value
--- | ---
country | str
articles_per_population | int
percentage_hq_articles | int
###Code
# load the cleaned data
combined_data = pd.read_csv('../data_clean/combined_data.csv',
dtype={'revision_id': str},
encoding='ISO-8859-1')
# identify all of the countries (removing duplicates)
country = pd.DataFrame({'country': sorted(combined_data['country'].unique())})
# count the total number of articles (NOTE: this includes articles with missing revids from the api call) per country
num_articles = combined_data.groupby('country')['article_name'].count().reset_index()
num_articles.columns = ['country', 'count']
# count the number of 'high-quality' articles per country
hq_articles = combined_data[(combined_data['article_quality'] == 'FA') |
(combined_data['article_quality'] == 'GA')
].groupby('country')['article_quality'].count()
hq_articles = hq_articles.reindex(country['country'], fill_value=0).reset_index()
hq_articles.columns = ['country', 'count']
# get the population per country
population = combined_data.groupby('country')['population'].max().reset_index()
# calculate the two 'analysis' variables:
# 1. proportion (as a percentage) of articles-per-population for each country
# 2. proportion (as a percentage) of high-quality articles for each country
articles_per_population = pd.DataFrame({'country': country['country'],
'percentage': 100*(num_articles['count'] / population['population'])})
percentage_hq_articles = pd.DataFrame({'country': country['country'],
'percentage': 100*(hq_articles['count'] / num_articles['count'])})
# combine the two 'analysis' variables into a dataframe
analysis = pd.DataFrame({'country': country['country'],
'articles_per_population': articles_per_population['percentage'],
'percentage_hq_articles': percentage_hq_articles['percentage']
})
# reorder the columns in the dataframe
analysis = analysis[['country', 'articles_per_population', 'percentage_hq_articles']]
###Output
_____no_output_____
###Markdown
TablesThe following function converts a 2-column pandas DataFrame into a markdown style table for easy implementation on github or any other markdown environment.
###Code
def convert_table_markdown(df, filename):
'''
This function takes in a 2-column dataframe and converts it into a markdown table
:param df: pandas dataframe
:param filename: str saved file name
'''
df_table = df.columns[0] + ' | ' + df.columns[1] + ' (%)' + '\n---' + ' | ' + '---'
for i in range(len(df)):
df_table += '\n' + df[df.columns[0]][i] + ' | ' + '%f' % round(df[df.columns[1]][i], 6) + '%'
df_markdown = open('../analysis/' + filename + '.txt', 'w')
df_markdown.write(df_table)
df_markdown.close()
###Output
_____no_output_____
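###Markdown
A minimal sketch (hypothetical toy data) of the markdown string this function builds, printed here instead of being written to the analysis directory.
###Code
import pandas as pd
# Toy 2-column frame standing in for one of the analysis tables.
toy_df = pd.DataFrame({'country': ['Alpha', 'Beta'],
                       'articles_per_population': [1.234567, 0.000123]})
# Mirror the function's string construction: header row, separator, then one row per country.
table = toy_df.columns[0] + ' | ' + toy_df.columns[1] + ' (%)' + '\n--- | ---'
for i in range(len(toy_df)):
    table += '\n' + toy_df[toy_df.columns[0]][i] + ' | ' + '%f' % round(toy_df[toy_df.columns[1]][i], 6) + '%'
print(table)
###Output
_____no_output_____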
###Markdown
Here the four visualizations are created by sorting the analysis DataFrame, either ascending or descending depending on whether it is for the highest 10 or lowest 10, and keeping only the first 10 records.__NOTE: Upon performing this operation, it was seen that there were more than 10 countries with 0.00% high-quality articles. As such, a bar graph representation would not be useful, and a list was created instead. Also, since all 10 showed 0.00%, the analysis was performed so that it lists every country with a 0.00% high-quality article percentage, so as not to favor particular countries.__
###Code
# sort the analysis dataframe based on the key attributes and only return the first 10
viz1 = analysis.sort_values('articles_per_population', ascending=False)[['country', 'articles_per_population']][0:10].reset_index(drop=True)
viz2 = analysis.sort_values('articles_per_population')[['country', 'articles_per_population']][0:10].reset_index(drop=True)
viz3 = analysis.sort_values('percentage_hq_articles', ascending=False)[['country', 'percentage_hq_articles']][0:10].reset_index(drop=True)
# this visualization will be handled only as a table, since there are more than 10 countries with no high-quality articles
viz4 = analysis[analysis['percentage_hq_articles'] == 0][['country', 'percentage_hq_articles']].sort_values('country').reset_index(drop=True)
# convert visualizaions to a markdown style tables (text files to be included in analysis README.md)
# convert_table_markdown(viz1, 'viz1')
# convert_table_markdown(viz2, 'viz2')
# convert_table_markdown(viz3, 'viz3')
# convert_table_markdown(viz4, 'viz4')
###Output
_____no_output_____
###Markdown
The bar plots were included as they had already been created prior to the update stating that they were not necessary for the purposes of the assignment.
###Code
# create bar plots for the first three visualizations and save the images
ax1 = viz1.plot(x='country', kind='bar', figsize=(16, 12), legend=False, fontsize=14)
plt.title('10 highest-ranked countries in terms of number of politician articles as a proportion of country population', fontsize=16)
plt.xlabel('country', fontsize=14)
plt.ylabel('articles per population (%)', fontsize=14)
plt.tight_layout()
# plt.savefig('../analysis/viz1.png')
ax2 = viz2.plot(x='country', kind='bar', figsize=(16, 12), legend=False, fontsize=14)
plt.title('10 lowest-ranked countries in terms of number of politician articles as a proportion of country population', fontsize=16)
plt.xlabel('country', fontsize=14)
plt.ylabel('articles per population (%)', fontsize=14)
plt.tight_layout()
# plt.savefig('../analysis/viz2.png')
ax3 = viz3.plot(x='country', kind='bar', figsize=(16, 12), legend=False, fontsize=14)
plt.title('10 highest-ranked countries in terms of number of high-quality (GA/FA) articles\n'
'as a proportion of all articles about politicians from that country', fontsize=16)
plt.xlabel('country', fontsize=14)
plt.ylabel('high-quality articles (%)', fontsize=14)
plt.tight_layout()
# plt.savefig('../analysis/viz3.png')
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
viz1
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
viz2
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of high-quality (GA/FA) articles as a proportion of all articles about politicians from that country
###Code
viz3
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
viz4
###Output
_____no_output_____
###Markdown
A2 - Bias in DataThe goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. For this assignment, we will combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article.We will then perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians varies between countries. The analysis will consist of a series of tables that show:1. The countries with the greatest and least coverage of politicians on Wikipedia compared to their population.2. The countries with the highest and lowest proportion of high quality articles about politicians.3. A ranking of geographic regions by articles-per-person and proportion of high quality articles. Step 1: Getting the Article and Population DataThe first step is getting the data. The Wikipedia [politicians by country dataset](https://figshare.com/articles/Untitled_Item/5513449) can be found on Figshare. Here it is called page_data.csv.The population data is available in the file WPDS_2020_data.csv; here it is loaded as WPDS_df. This dataset is drawn from the [world population data sheet](https://www.prb.org/international/indicator/population/table/) published by the Population Reference Bureau.Our analysis will also use score estimates generated from ORES. You must `pip install ores` prior to running this notebook, or follow the [installation instructions](https://github.com/wikimedia/ores).
###Code
import pandas as pd
import numpy as np
from ores import api
from tqdm import tqdm
page_data_path = '../data/page_data.csv'
WPDS_path = '../data/WPDS_2020_data.csv'
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the DataBoth page_df and WPDS_df contain some rows we will need to filter out or ignore. We will clean the datasets here.
###Code
# Filter out any rows that begin with 'Template:'. These are not Wikipedia articles.
page_df = pd.read_csv(page_data_path)
page_df = page_df.loc[~page_df['page'].str.contains('Template:')]
page_df
###Output
_____no_output_____
###Markdown
Here we add a column to the WPDS_df for 'Region' as well as 'region_population' so we have a way to associate each country with its region. Then we separate the regions and countries into separate dfs.
###Code
WPDS_df = pd.read_csv(WPDS_path)
WPDS_df
# Adding the sub-region and region_population to WPDS_df.
region = ('NORTHERN AFRICA', 244344000)
regions = [('WORLD', 7772850000) , ('AFRICA', 1337918000), ('NORTHERN AFRICA', 244344000)]
for i in range(3, len(WPDS_df)):
if WPDS_df.iloc[i]['Type'] == 'Sub-Region':
region = (WPDS_df.iloc[i]['Name'], WPDS_df.iloc[i]['Population'])
regions.append(region)
regions_tuples_df = pd.DataFrame(regions, columns=['Region', 'region_population'])
WPDS_df = pd.concat([WPDS_df, regions_tuples_df], axis=1)
WPDS_df
# Separate all UPPERCASE entries from lowercase ones. UPPERCASE names are regions and lowercase are countries.
regions_df = WPDS_df.loc[WPDS_df.Name.str.isupper() == True]
countries_df = WPDS_df.loc[WPDS_df.Name.str.isupper() == False]
regions_df.head(10)
countries_df
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality Predictions Now we need to get the predicted quality scores for each article in the Wikipedia dataset. We're using a machine learning system called ORES. This was originally an acronym for "Objective Revision Evaluation Service" but was simply renamed “ORES”. ORES is a machine learning tool that can provide estimates of Wikipedia article quality. The article quality estimates are, from best to worst:1. FA - Featured article2. GA - Good article3. B - B-class article4. C - C-class article5. Start - Start-class article6. Stub - Stub-class articleThese were learned based on articles in Wikipedia that were peer-reviewed using the [Wikipedia content assessment procedures](https://en.wikipedia.org/wiki/Wikipedia:Content_assessment).These quality classes are a sub-set of quality assessment categories developed by Wikipedia editors. ORES will assign one of these 6 categories to any rev_id we send it.To get the score estimates, we will build a list of all the rev_ids in page_df and feed them one at a time to ORES. Each query will return a generator object which we will collect in a list called 'results'.
###Code
ores_session = api.Session("https://ores.wikimedia.org", "DATA 512 Class project <[email protected]>")
revids = list(page_df.rev_id)
results = []
for revid in revids:
results.append(ores_session.score("enwiki", ["articlequality"], [revid]))
# Empty list to populate with (rev_id, score) tuples.
scores = []
###Output
_____no_output_____
###Markdown
Here is an example of one of the generator objects that is stored in 'results'. **We will only be concerned with the 'prediction' field.**
###Code
for score in results[0]:
print(score)
###Output
{'articlequality': {'score': {'prediction': 'Stub', 'probability': {'B': 0.005643168767502225, 'C': 0.005641424870624224, 'FA': 0.0010757577110297029, 'GA': 0.001543343686495854, 'Start': 0.010537503531047517, 'Stub': 0.9755588014333005}}}}
###Markdown
This cell populates the list `scores=[(rev_id, score)]` which stores a tuple for each rev_id, score pair. We will then convert the list `scores` to a dataframe and finally merge it with `page_df` using rev_id as the key so that each article has a score.
###Code
for i in tqdm(range(len(results))):
for score in results[i]:
if 'error' in list(score['articlequality'].keys()):
scores.append((revids[i], np.nan))
else:
scores.append((revids[i], score['articlequality']['score']['prediction']))
# Convert scores which is a list of tuples to a dataframe
scores_df = pd.DataFrame(scores, columns=['rev_id', 'score'])
scores_df
# scores_df.to_csv('../data/scores_df.csv')
scores_df = pd.read_csv('../data/scores_df.csv')
###Output
_____no_output_____
###Markdown
Here we merge the scores_df with the page_df on rev_id. We use a left merge to retain all the rows of page_df. Then we will separate all the articles for which ORES was unable to determine a score (score='NaN') from the articles with valid scores.
###Code
page_df = page_df.merge(scores_df, how='left', left_on='rev_id', right_on='rev_id')
page_df
nan_scores_df = page_df[page_df['score'].isna()]
articles_df = page_df[~page_df['score'].isna()]
###Output
_____no_output_____
###Markdown
There are 277 articles for which ORES was unable to determine a score. We will export the dataframe containing those rows to a file called `nan_scores_df.csv`.
###Code
nan_scores_df.shape
nan_scores_df.to_csv('../data/nan_scores_df.csv')
###Output
_____no_output_____
###Markdown
Step 4: Combining the DatasetsWe need to merge the Wikipedia data and population data together. Both have fields containing country names which we will use for the merge. After merging the data, we will find that some entries could not be merged: either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.We will use an outer merge to retain all rows from both dataframes. Then we will remove any rows that are missing article, country, or score. We will save them to a CSV file called `wp_wpds_countries-no_match.csv`. The remaining data will be consolidated into a single CSV file called `wp_wpds_politicians_by_country.csv`.The schema for that file looks like this:

| Column |
|---------------------|
| country |
| article_name |
| revision_id |
| article_quality_est |
| population |
| region |
| region_population |
###Code
merged_df = articles_df.merge(countries_df, how='outer', left_on='country', right_on='Name')
merged_df
# Every row that is missing a score, Name, or page will be consolidated into no_matches_df.
no_matches_df = merged_df.loc[(merged_df.score.isna()) | (merged_df.Name.isna()) | (merged_df.page.isna())]
# After dropping the no_matches, the remaining rows are valid to use for our analysis.
merged_df = merged_df.drop(index=no_matches_df.index)
merged_df = merged_df.drop(columns=['Name', 'Type', 'FIPS', 'TimeFrame', 'Data (M)', 'Unnamed: 0']).rename(columns={'page': 'article_name', 'rev_id': 'revision_id', 'score': 'article_quality_est', 'Population': 'population', 'Region': 'region'})
merged_df
no_matches_df.to_csv('../data/wp_wpds_countries-no_match.csv')
merged_df.to_csv('../data/wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Step 5: AnalysisThe analysis will consist of calculating the proportion (as a percentage) of articles-per-population and high-quality articles for each country AND for each geographic region. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.**Examples:**- If a country has a population of 10,000 people, and you found 10 articles about politicians from that country, then the percentage of articles-per-population would be .1%.- If a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.For the country-level analysis, we will begin by using groupby on country and article_quality_est so that we have a count of the number of articles of each level for each country. We also want to retain the country's population.
###Code
groupby_country_df = merged_df.groupby(['country', 'article_quality_est']).agg({'revision_id': 'count', 'population': 'first'})
groupby_country_df
###Output
_____no_output_____
###Markdown
We have 183 countries represented in our final dataset that have at least one Wikipedia article with an estimated score.
###Code
len(merged_df.country.unique())
###Output
_____no_output_____
###Markdown
Here we calculate the percentage of articles per population and the percentage of high-quality (FA or GA) articles relative to the total number of articles for each country. We will collect the results in results_by_country_df.
###Code
data_by_country = []
countries = merged_df.country.unique()
for country in countries:
if (groupby_country_df.index.isin([(country, 'FA')]).any()) | (groupby_country_df.index.isin([(country, 'GA')]).any()):
high_articles_sum = groupby_country_df.loc[(country, ['FA', 'GA']), :].revision_id.sum()
else:
high_articles_sum = 0
articles_sum = groupby_country_df.loc[(country, slice(None)), :].revision_id.sum()
country_population = groupby_country_df.loc[(country, slice(None)), :].population[0]
articles_per_pop = ( articles_sum/country_population ) * 100
high_quality = ( high_articles_sum/articles_sum ) * 100
data_by_country.append([country, articles_per_pop, high_quality])
results_by_country_df = pd.DataFrame(data_by_country, columns=['country', 'articles_per_pop', 'high_quality'])
results_by_country_df
###Output
_____no_output_____
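###Markdown
A small sketch (hypothetical toy data) of the MultiIndex `.loc` patterns used in the loop above: a `(country, [classes])` tuple selects specific quality classes, `slice(None)` selects every class for that country, and `index.isin` guards against missing (country, class) combinations.
###Code
import pandas as pd
# Toy grouped frame indexed by (country, quality class), holding article counts.
idx = pd.MultiIndex.from_tuples([('A', 'FA'), ('A', 'Stub'), ('B', 'GA')],
                                names=['country', 'article_quality_est'])
toy = pd.DataFrame({'revision_id': [2, 5, 1]}, index=idx)
print(toy.index.isin([('A', 'FA')]).any())               # does country A have any FA rows?
print(toy.loc[('A', ['FA']), :].revision_id.sum())        # count of A's FA articles
print(toy.loc[('A', slice(None)), :].revision_id.sum())   # count of all of A's articles
###Output
_____no_output_____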
###Markdown
Now we perform a similar aggregation and calculation for the regions. The region-level statistics will be stored in results_by_region_df.
###Code
groupby_region_df = merged_df.groupby(['region', 'article_quality_est']).agg({'revision_id': 'count', 'region_population': 'first'})
groupby_region_df
###Output
_____no_output_____
###Markdown
We have 19 unique regions.
###Code
len(merged_df.region.unique())
data_by_region = []
regions = merged_df.region.unique()
for region in regions:
if (groupby_region_df.index.isin([(region, 'FA')]).any()) | (groupby_region_df.index.isin([(region, 'GA')]).any()):
high_articles_sum = groupby_region_df.loc[(region, ['FA', 'GA']), :].revision_id.sum()
else:
high_articles_sum = 0
articles_sum = groupby_region_df.loc[(region, slice(None)), :].revision_id.sum()
region_population = groupby_region_df.loc[(region, slice(None)), :].region_population[0]
articles_per_pop = ( articles_sum/region_population ) * 100
high_quality = ( high_articles_sum/articles_sum ) * 100
data_by_region.append([region, articles_per_pop, high_quality])
results_by_region_df = pd.DataFrame(data_by_region, columns=['region', 'articles_per_pop', 'high_quality'])
results_by_region_df
###Output
_____no_output_____
###Markdown
Step 6: ResultsBelow is a summary of the results: 1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
results_by_country_df.nlargest(10, 'articles_per_pop', keep='all')[['country', 'articles_per_pop']]
###Output
_____no_output_____
###Markdown
2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
results_by_country_df.nsmallest(10, 'articles_per_pop', keep='all')[['country', 'articles_per_pop']].head(10)
###Output
_____no_output_____
###Markdown
3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
results_by_country_df.nlargest(10, 'high_quality', keep='all')[['country', 'high_quality']]
###Output
_____no_output_____
###Markdown
4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
results_by_country_df.nsmallest(10, 'high_quality', keep='all')[['country', 'high_quality']].head(10)
###Output
_____no_output_____
###Markdown
5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
results_by_region_df.sort_values('articles_per_pop', ascending=False)[['region', 'articles_per_pop']]
###Output
_____no_output_____
###Markdown
6. Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
results_by_region_df.sort_values('high_quality', ascending=False)[['region', 'high_quality']]
###Output
_____no_output_____
###Markdown
A2 - Bias in DataPreston StringhamThe purpose of this project is to think about bias with respect to human-centered data science. This is demonstrated by finding the distribution of the quality of articles regarding politicians from many countries using the English Wikipedia. I expect there to be many sources of bias, especially heavy bias towards developed countries that also speak English. Let's see how my intuition compares to what is found in the data. Step 1 - Data Acquisition Let's load in our data and libraries.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import requests
politician_df = pd.read_csv('../data/data-raw/page_data.csv')
population_df = pd.read_csv('../data/data-raw/WPDS_2020_data - WPDS_2020_data.csv')
###Output
_____no_output_____
###Markdown
Let's now observe our data.
###Code
politician_df
population_df
###Output
_____no_output_____
###Markdown
Step 2 - Data Preprocessing Some basic preprocessing is needed before getting our scores. We need to remove rows that have "Template:" in our politician dataframe, as these are not Wikipedia articles. In addition, the population dataframe has region labels in capital letters that we need to remove for now.
###Code
politician_df = politician_df[~politician_df.page.str.contains("Template:")]
population_df = population_df[~population_df['Name'].str.isupper()]
###Output
_____no_output_____
###Markdown
Getting Scores We are now ready to obtain our ORES scores using the corresponding ORES REST API. The endpoint for this API is below.
###Code
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/{revid}/articlequality'
###Output
_____no_output_____
###Markdown
We create this small helper function, which fills the endpoint template with the given parameters and returns the REST API result in JSON format.
###Code
def api_call(endpoint,parameters):
call = requests.get(endpoint.format(**parameters))
response = call.json()
return response
###Output
_____no_output_____
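###Markdown
For illustration only (no request is sent), this is the URL string that api_call would hit for a hypothetical revision ID, using the endpoint template defined above.
###Code
# Expand the endpoint template with a hypothetical revision ID.
sample_endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki/{revid}/articlequality'
print(sample_endpoint.format(**{'revid': '1234'}))
###Output
_____no_output_____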
###Markdown
We need to get results for each article based on its revision ID. Below, I instantiate a dictionary that structures the parameters passed to the api_call method.
###Code
parameters = {'revid': '1234'}
politician_df = politician_df.reset_index()
###Output
_____no_output_____
###Markdown
This is when we actually get the scores. The potentially naive idea I have is to iterate through every single revision ID in our politician dataframe and get the results. This makes the processing of the REST API data much more direct, as we are only dealing with one response at a time. Additional logic is added since we want to take note of which articles were unable to produce a score from ORES.
###Code
for i in range(len(politician_df.index)):
rev_id = str(politician_df.at[i, 'rev_id'])
parameters['revid'] = rev_id
result = api_call(endpoint, parameters)
if 'error' in list(result['enwiki']['scores'][rev_id]['articlequality'].keys()):
politician_df.at[i, 'prediction'] = None
else:
politician_df.at[i, 'prediction'] = result['enwiki']['scores'][rev_id]['articlequality']['score']['prediction']
###Output
_____no_output_____
###Markdown
Let's check our data now that we have predictions.
###Code
politician_df
###Output
_____no_output_____
###Markdown
For any prediction of None, let's export this data so we are aware of which articles were not able to produce a result. 276 articles could not be scored; I output a log of them below.
###Code
politician_df[politician_df['prediction'].isna()].to_csv('../data/data-errors/wp_wpds_countries-no_prediction.csv')
###Output
_____no_output_____
###Markdown
We can drop those rows now.
###Code
politician_df = politician_df.dropna()
population_df = population_df.rename(columns={'Name': 'country'})
###Output
_____no_output_____
###Markdown
We can now join the population data to the politician data based on the country name.
###Code
join_df = pd.merge(politician_df, population_df, on=['country'], how='outer')
join_df[join_df['country'] == 'United States']
join_df = join_df.drop(['index', 'FIPS', 'TimeFrame', 'Data (M)', 'Type'], axis=1)
###Output
_____no_output_____
###Markdown
I now want to find any rows that could not be joined together.
###Code
no_matches_df = join_df[join_df.isna().any(axis=1) == True]
no_matches_df.to_csv('../data/data-errors/wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
We can drop those rows too.
###Code
join_df = join_df.dropna()
###Output
_____no_output_____
###Markdown
I now simply rearrange the columns and give them better names.
###Code
join_df = join_df[['country', 'page', 'rev_id', 'prediction', 'Population']]
join_df.columns = ['country', 'article_name', 'revision_id', 'article_quality_est', 'population']
join_df.to_csv('../data/data-clean/wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Step 3 - Analysis
###Code
join_df = pd.read_csv('../data/data-clean/wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
We can use pivot tables here to get a count of the articles.
###Code
pivot_df = pd.pivot_table(join_df,
fill_value=0,
columns=['article_quality_est'],
aggfunc={'article_quality_est': len}, index=['country']
)
pivot_df.columns = pivot_df.columns.droplevel()
pivot_df = pivot_df.reset_index()
population_df = population_df.rename(columns={'Name' : 'country'})
pivot_analysis = pd.merge(pivot_df, population_df, on='country', how='inner')
pivot_analysis.drop(['Type', 'FIPS', 'Data (M)', 'TimeFrame'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Let's take a look at our new pivoted dataframe.
###Code
pivot_analysis
###Output
_____no_output_____
###Markdown
We now find the sum of all articles as well as the sum of the high quality articles. We then find the number of articles per person and the proportion of high quality articles among all articles for that country.
###Code
pivot_analysis['total_article_count'] = pivot_analysis[['B', 'C', 'GA', 'Start', 'Stub', 'FA']].sum(axis=1)
pivot_analysis['quality_article_count'] = pivot_analysis[['GA', 'FA']].sum(axis=1)
pivot_analysis['articles_per_person'] = (pivot_analysis['total_article_count']/pivot_analysis['Population']) * 100
pivot_analysis['quality_per_article'] = (pivot_analysis['quality_article_count']/pivot_analysis['total_article_count']) * 100
###Output
_____no_output_____
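###Markdown
A tiny sketch (hypothetical counts) of the row-wise sums above: selecting the quality columns and summing across `axis=1` gives per-row totals and high-quality counts.
###Code
import pandas as pd
# Toy per-country counts of articles in each quality class.
toy = pd.DataFrame({'FA': [1, 0], 'GA': [0, 2], 'Stub': [3, 4]})
toy['total_article_count'] = toy[['FA', 'GA', 'Stub']].sum(axis=1)
toy['quality_article_count'] = toy[['FA', 'GA']].sum(axis=1)
print(toy)
###Output
_____no_output_____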
###Markdown
We need to find the same information as the pivot_analysis dataframe but on a regional level. Additional processing is necessary to find this.
###Code
full_population_df = pd.read_csv('../data/data-raw/WPDS_2020_data - WPDS_2020_data.csv')
full_population_df = full_population_df.rename(columns={'Name' : 'country'})
###Output
_____no_output_____
###Markdown
One easy way to determine whether a row is a label for a sub-region is to use the 'FIPS' column. Every country is labeled with a two-letter code, while every sub-region has a full name (e.g. 'NORTHERN AFRICA'), so we just need to fill in all the rows below a sub-region label with its corresponding sub-region label. One of the FIPS rows was NaN, so I simply fill in a value so the logic works correctly.
###Code
full_population_df['FIPS'].iloc[62] = 'FF' # One country had NaN FIPS. Had to fix.
current_region = 'NORTHERN AFRICA'
for i in range(3, len(full_population_df.index)):
if len(full_population_df.at[i, 'FIPS']) == 2 or full_population_df.at[i, 'FIPS'] == None:
full_population_df.at[i, 'Region'] = current_region
else:
current_region = full_population_df.at[i, 'FIPS']
region_df = pd.merge(pivot_analysis, full_population_df, on='country', how='outer')
region_df = region_df.drop(['FIPS', 'country', 'Type', 'TimeFrame', 'Data (M)', 'Population_x'], axis=1)
###Output
_____no_output_____
###Markdown
I now group by the region with the sum aggregate function. This should give me the sum of articles for every country in that region.
###Code
region_df = region_df.groupby(['Region']).sum()
region_df = region_df.rename(columns={'Population_y' : 'Population'})
###Output
_____no_output_____
###Markdown
We now find the sum of all articles as well as the sum of the high quality articles. We then find the number of articles per person and the proportion of high quality articles among all articles for that country. This time, at the regional level.
###Code
region_df['total_article_count'] = region_df[['B', 'C', 'GA', 'Start', 'Stub', 'FA']].sum(axis=1)
region_df['quality_article_count'] = region_df[['GA', 'FA']].sum(axis=1)
region_df['articles_per_person'] = (region_df['total_article_count']/region_df['Population']) * 100
region_df['quality_per_article'] = (region_df['quality_article_count']/region_df['total_article_count']) * 100
region_df = region_df.reset_index()
region_df = region_df.dropna()
###Output
_____no_output_____
###Markdown
Step 4 - Results Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
pivot_analysis[['country', 'articles_per_person', 'Population']].sort_values('articles_per_person', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
pivot_analysis[['country', 'articles_per_person', 'Population']].sort_values('articles_per_person', ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
pivot_analysis[['country', 'quality_per_article', 'total_article_count']].sort_values('quality_per_article', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
pivot_analysis[['country', 'quality_per_article']].sort_values('quality_per_article', ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
region_df[['Region', 'articles_per_person']].sort_values('articles_per_person', ascending=False)
###Output
_____no_output_____
###Markdown
Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
region_df[['Region', 'quality_per_article']].sort_values('quality_per_article', ascending=False)
###Output
_____no_output_____
###Markdown
Bias on WikipediaThe goal of this assignment is to explore the concept of 'bias' through data on Wikipedia articles - specifically, articles on political figures from a variety of countries.
* perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians varies between countries
* list the countries with the greatest and least coverage of politicians on Wikipedia compared to their population.
* list the countries with the highest and lowest proportion of high quality articles about politicians.

ORES requestORES (Objective Revision Evaluation Service) is an artificial intelligence system used to identify vandalism on Wikipedia and distinguish it from good faith edits.

References
* https://wiki.communitydata.cc/HCDS_(Fall_2017)/AssignmentsA2:_Bias_in_data
* https://en.wikipedia.org/wiki/Aaron_Halfaker
* https://www.mediawiki.org/wiki/ORES

Data Sources
* http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14
* https://figshare.com/articles/Untitled_Item/5513449

Setup
###Code
#import required libraries
import requests
import json
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Importing the other data is just a matter of reading CSV files in! (and for the R programmers - we'll have an R example up as soon as the Hub supports the language). Step 1: Getting the article and population data
* Wikipedia articles data is downloaded from 'figshare'. This project contains data on most English-language Wikipedia articles within the category "Category:Politicians by nationality" and subcategories, along with the code used to generate that data
* Population data is downloaded from the Population Reference Bureau (PRB). The URL is provided in the 'Data Sources' section.
###Code
# downloaded from figshare
wiki_data = pd.read_csv('data-512-a2/data/raw/page_data.csv')
# downloaded from Population Reference Bureau
population_data = pd.read_csv('data-512-a2/data/raw/Population Mid-2015.csv', header = 2)
wiki_data.head()
population_data = population_data.drop(columns='Footnotes')
population_data.head()
###Output
_____no_output_____
###Markdown
Step 2: Getting article quality predictionsORES estimates the quality of an article (at a particular point in time), and assigns a series of probabilities that the article is in one of 6 quality categories. The options are, from best to worst:* FA - Featured article(high quality)* GA - Good article(high quality)* B - B-class article* C - C-class article* Start - Start-class article* Stub - Stub-class article
###Code
# function to get article quality prediction (provided in class by Oliver Keyes)
def get_ores_data(revision_ids, headers):
    # Define the endpoint
    endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revids}'
    # Define parameters
    params = {'project' : 'enwiki',
              'model' : 'wp10',
              'revids' : '|'.join(str(x) for x in revision_ids)
              }
    # pass the headers so the ORES team can identify the requester
    api_call = requests.get(endpoint.format(**params), headers=headers)
    response = api_call.json()
    for x in revision_ids:
        item = str(x)
        # extract prediction from the json response
        prediction.append(response['enwiki']['scores'][item]['wp10']['score']['prediction'])
        revision_id.append(x)

# request headers identifying the requester; the values below are placeholders - replace them with your own details
headers = {'User-Agent': 'https://github.com/your-github-username', 'From': '[email protected]'}

# Call get_ores_data() by providing up to 100 revision ids at a time
response = []
prediction = []
country = []
article_name = []
revision_id = []
i = 0
while(i < len(wiki_data)):
    try:
        example_ids = []
        j = 0
        # stop at 100 ids or at the end of the dataframe so the final partial batch is still scored
        while(j < 100 and i < len(wiki_data)):
            example_ids.append(wiki_data['rev_id'][i])
            j = j + 1
            i = i + 1
        get_ores_data(example_ids, headers)
    except Exception:
        # note: this silently skips any batch whose API call or response parsing fails
        pass
#print prediction and revision_id length
len(prediction)
len(revision_id)
# merge prediction and revision id which we got from get_ores_data()
wiki_df = pd.DataFrame({'article_quality':prediction, 'rev_id':revision_id})
# merge the dataframe with prediction and revision_id with wikipedia data
wiki_merged_df = wiki_df.merge(wiki_data, left_on='rev_id', right_on='rev_id', how='inner')
wiki_merged_df.head()
# write wikipedia data along with article quality in a single dataframe
wikipedia_data = pd.DataFrame({
'country':wiki_merged_df['country'],
'article_name': wiki_merged_df['page'],
'revision_id': wiki_merged_df['rev_id'],
'article_quality': wiki_merged_df['article_quality']
} )
# write population data in a single dataframe, this will be merged with wikipedia data
pop_data = pd.DataFrame({
'country':population_data['Location'],
'population': population_data['Data']})
pop_data.head()
###Output
_____no_output_____
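###Markdown
As an aside, the batching logic above can be factored into a small reusable helper. The sketch below (the name `chunked` is ours, not from any library) yields successive slices of at most 100 rev_ids; the usage lines are commented out so the API is not called again here.
###Code
# Minimal sketch of a reusable batching helper for the ORES calls above.
def chunked(values, size=100):
    # yield successive slices of `values` of length `size`; the last slice may be shorter
    for start in range(0, len(values), size):
        yield values[start:start + size]

# Possible usage (commented out to avoid re-calling the API here):
# for batch in chunked(list(wiki_data['rev_id']), 100):
#     get_ores_data(batch, headers)
###Output
_____no_output_____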
###Markdown
Step 3: Combining the datasets
###Code
#merge wikipedia and population data
final_merged_df = wikipedia_data.merge(pop_data, left_on='country', right_on='country', how='inner')
# resulting data on merging wikipedia and population data
final_merged_df.head()
# write merged dataframe in csv file
final_merged_df.to_csv('article_quality_data_with_population.csv', sep=',')
###Output
_____no_output_____
###Markdown
Step 4: Analysis Step 4(a): Number of politician articles as a proportion of country population
###Code
# get count of articles by country, this is same as groupby in sql
articles_by_country = final_merged_df.groupby('country').count()['article_name'].astype(int).reset_index()
articles_by_country = pd.DataFrame({'country':articles_by_country['country'], 'articles_count':articles_by_country['article_name']})
articles_by_country.head()
# merge grouped data with population data
articles_proportion = articles_by_country.merge(pop_data, left_on='country', right_on='country', how='inner')
articles_proportion['percentage'] = articles_proportion['articles_count']*100/articles_proportion['population']
# sort the data in descending order to get list of top 10 and bottom 10 countries
rank_of_countries =articles_proportion.sort_values(['percentage'], ascending=[False])
rank_of_countries = rank_of_countries.dropna()
#10 highest-ranked countries in terms of number of politician articles as a proportion of country population
rank_of_countries.head(10)
#10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
rank_of_countries.tail(10)
###Output
_____no_output_____
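###Markdown
For reference, the same per-country coverage ranking can be written as one chained expression. The sketch below assumes final_merged_df as built above and a numeric population column; it is an alternative formulation of the steps above, not a different analysis.
###Code
# Minimal sketch: per-country article coverage as a percentage of population, in one chain.
coverage = (final_merged_df
            .groupby('country')
            .agg(articles_count=('article_name', 'count'),
                 population=('population', 'max'))
            .assign(percentage=lambda d: d['articles_count'] * 100 / d['population'])
            .sort_values('percentage', ascending=False))
coverage.head(10)
###Output
_____no_output_____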
###Markdown
Step 4(b): Number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
# get count of high quality articles by country by selecting rows with 'FA' and 'GA' and then grouping on country
GA_FA_quality = pd.concat([final_merged_df.loc[final_merged_df['article_quality']=='FA'],
final_merged_df.loc[final_merged_df['article_quality']=='GA']])
GA_FA_quality = GA_FA_quality.groupby('country').count()['article_name'].reset_index()
GA_FA_quality = pd.DataFrame({'country':GA_FA_quality['country'], 'GA_FA_articles_count':GA_FA_quality['article_name']})
GA_FA_quality.head()
# merge grouped data with articles by country data
GA_FA_articles_proportion = GA_FA_quality.merge(articles_by_country, left_on='country', right_on='country', how='inner')
GA_FA_articles_proportion['percentage_of_GA_FA'] = GA_FA_articles_proportion['GA_FA_articles_count']*100/GA_FA_articles_proportion['articles_count']
GA_FA_articles_proportion.head()
## sort the data in descending order to get list of top 10 and bottom 10 countries
rank_of_countries_by_GA_FA =GA_FA_articles_proportion.sort_values(['percentage_of_GA_FA'], ascending=[False])
#10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
rank_of_countries_by_GA_FA.head(10)
#10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
rank_of_countries_by_GA_FA.tail(10)
###Output
_____no_output_____
###Markdown
DATA 512 Homework 2: Bias in Data. Fall 2021. Author: Dwight Sablan. Background: The goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. I will combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article. I perform an analysis of how the coverage of politicians on Wikipedia and the quality of articles about politicians vary between countries. Step 1: Getting the Article and Population Data The first step is getting the data, which lives in several different places. Dataset 1: The Wikipedia politicians by country dataset can be found on Figshare. We download and unzip the data file named page_data.csv. Dataset 2: The population data is available in CSV format as WPDS_2020_data.csv. This dataset is drawn from the world population data sheet published by the Population Reference Bureau. IMPORT DEPENDENCIES
###Code
import pandas as pd
import numpy as np
import json
import requests
###Output
_____no_output_____
###Markdown
READ IN THE TWO DATASETS
###Code
politician_data = pd.read_csv('page_data.csv')
#print dataframe shape
display(politician_data.shape)
#print first 5 rows
display(politician_data.head())
population_data = pd.read_csv('WPDS_2020_data.csv')
#print dataframe shape
display(population_data.shape)
#print first five rows
display(population_data.head())
###Output
_____no_output_____
###Markdown
Step 2: Cleaning the Data In the politician dataset, filter out the page names that contain the string 'Template' as they won't be needed in the analysis.
###Code
politician_data_cleaned = politician_data[~ politician_data.page.str.contains("Template")]
display(politician_data_cleaned.shape)
display(politician_data_cleaned.head())
###Output
_____no_output_____
###Markdown
In the population dataset, separate cumulative regional population counts and country-level counts. The regional population rows are denoted with characters in all caps. Ex: OCEANIA
###Code
#apply the isupper function to the Name column
regional_population = population_data[population_data['Name'].apply(lambda x: x.isupper())]
display(regional_population.shape)
display(regional_population.head())
#get the inverse of the regional_population dataset to get the country-level populations
country_population = population_data[~population_data['Name'].apply(lambda x: x.isupper())]
display(country_population.shape)
display(country_population.head())
###Output
_____no_output_____
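###Markdown
A related trick, sketched below, attaches a region label to every country row in one pass. It assumes (as in the WPDS file) that each ALL-CAPS region row appears immediately before the countries belonging to it; the resulting 'region' column is for illustration only and is not used by the steps that follow.
###Code
# Minimal sketch: forward-fill the most recent ALL-CAPS region name onto the country rows.
wpds = population_data.copy()
wpds['region'] = wpds['Name'].where(wpds['Name'].str.isupper()).ffill()
country_with_region = wpds[~wpds['Name'].str.isupper()]
country_with_region.head()
###Output
_____no_output_____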
###Markdown
Step 3: Getting Article Quality Predictions Now we need to get the predicted quality scores for each article in the Wikipedia dataset. To do so, we use a machine learning system called ORES. ORES is a machine learning tool that can provide estimates of Wikipedia article quality. The article quality estimates are, from best to worst: - FA - Featured article - GA - Good article - B - B-class article - C - C-class article - Start - Start-class article - Stub - Stub-class article. These were learned based on articles in Wikipedia that were peer-reviewed using the Wikipedia content assessment procedures. These quality classes are a sub-set of quality assessment categories developed by Wikipedia editors. For a given rev_id, ORES will assign one of these 6 categories. We use the REST API, which provides access to a set of scoring models; this is how we'll get the article predictions. SET USER-AGENT AND ENDPOINT TO RETRIEVE DATA
###Code
headers = {
'User-Agent': 'https://github.com/dwightsablan16',
'From': '[email protected]'
}
endpoint = 'https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids={revid}'
###Output
_____no_output_____
###Markdown
DEFINE FUNCTION TO CALL API GET SCORES DATA
###Code
def api_call(endpoint, parameters):
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
return response
###Output
_____no_output_____
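###Markdown
A minimal usage sketch of api_call() with a single revision id follows. The id is only an illustrative placeholder (borrowed from elsewhere in this collection), and the request lines are commented out so the cell has no side effects; the batched calls further down are what the analysis actually uses.
###Code
sample_rev_id = '798538579'  # placeholder revision id, for illustration only
# sample = api_call(endpoint, {'revid': sample_rev_id})
# sample['enwiki']['scores'][sample_rev_id]['articlequality']['score']['prediction']
###Output
_____no_output_____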
###Markdown
DEFINE FUNCTION TO GET THE PREDICTIONS FOR EACH REV_ID
###Code
#Create list for predictions
predictions_list = []
def get_prediction(article_scores):
for i in article_scores['enwiki']['scores']:
#if there exists a prediction
if 'score' in article_scores['enwiki']['scores'][i]['articlequality']:
#get the article quality prediction
article_quality = article_scores['enwiki']['scores'][i]['articlequality']['score']['prediction']
#add prediction
predictions_list.append(article_quality)
else :
#Add 'no_pred' for articles with no predictions
predictions_list.append('no_pred')
###Output
_____no_output_____
###Markdown
INGEST THE DATA
###Code
#set initial index
begin_point = 0
while begin_point < politician_data_cleaned.shape[0]:
#set intervals of data ingestion
ingest_range = begin_point + 50
#set end index
end_point = min(ingest_range, politician_data_cleaned.shape[0])
#set parameters for API
parameters = {'context' : 'enwiki',
'revid' : '|'.join (str(x) for x in politician_data_cleaned['rev_id'][begin_point:end_point]),
'model' : 'articlequality'
}
#call api to get scores for corresponding indices
scores = api_call(endpoint, parameters)
#adjust beginning point to get next 50 responses
begin_point = begin_point + 50
#call function to store prediction value into prediction list
get_prediction(scores)
#Add the predictions to the dataframe
politician_data_cleaned['prediction'] = predictions_list
#View politician data frame
politician_data_cleaned.head()
###Output
_____no_output_____
###Markdown
GET THE PAGES WITH NO PREDICTIONS AND SAVE AS CSV
###Code
#get all the rev_id's with no prediction
no_prediction_data = politician_data_cleaned[politician_data_cleaned.prediction == 'no_pred']
#save as csv
no_prediction_data.to_csv('no_prediction_data.csv')
#remove articles in the dataset where we have no_pred
politician_data_cleaned = politician_data_cleaned[politician_data_cleaned.prediction != 'no_pred']
###Output
_____no_output_____
###Markdown
Step 4: Combining the Datasets Some processing of the data will be necessary! In particular, you'll need to - after retrieving and including the ORES data for each article - merge the Wikipedia data and population data together. Both have fields containing country names for just that purpose. After merging the data, you'll invariably run into entries which cannot be merged: either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa.
###Code
#rename column name 'Name' in country_population dataset to merge on 'country'
country_population = country_population.rename(columns={'Name': 'country'})
###Output
_____no_output_____
###Markdown
MERGE DATA
###Code
#data where country is in both datasets
merged_data = politician_data_cleaned.merge(country_population, how = 'outer' ,indicator=True).loc[lambda x : x['_merge'] == 'both']
#data where country is only in politician dataset
left_only_data = politician_data_cleaned.merge(country_population, how = 'outer' ,indicator=True).loc[lambda x : x['_merge'] == 'left_only']
#data where country is only in right dataset
right_only_data = politician_data_cleaned.merge(country_population, how = 'outer' ,indicator=True).loc[lambda x : x['_merge'] == 'right_only']
#aggregate data with no matches (pd.concat replaces the deprecated DataFrame.append)
no_match_data = pd.concat([left_only_data, right_only_data], ignore_index=True)
###Output
_____no_output_____
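###Markdown
The same result can be obtained with a single outer merge that is then split on the `_merge` indicator, which avoids repeating the join three times and guarantees all three subsets come from one merged frame. A minimal equivalent sketch:
###Code
# Minimal sketch: merge once, then split on the indicator column.
merged_once = politician_data_cleaned.merge(country_population, how='outer', indicator=True)
both_rows = merged_once[merged_once['_merge'] == 'both']
unmatched_rows = merged_once[merged_once['_merge'] != 'both']
###Output
_____no_output_____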
###Markdown
RENAME AND REMOVE UNNECESSARY COLUMNS
###Code
#rename columns
merged_data = merged_data.rename(columns = {'page': 'article_name', 'rev_id': 'revision_id', 'prediction': 'article_quality_est.', 'Population': 'population'})
no_match_data = no_match_data.rename(columns = {'page': 'article_name', 'rev_id': 'revision_id', 'prediction': 'article_quality_est.', 'Population': 'population'})
#drop columns not needed
dropped_cols = ['FIPS', 'Type', 'TimeFrame', 'Data (M)', '_merge']
merged_data = merged_data.drop(labels = dropped_cols, axis = 1)
no_match_data = no_match_data.drop(labels = dropped_cols, axis = 1)
###Output
_____no_output_____
###Markdown
SAVE DATA
###Code
#save data with matches
merged_data.to_csv('wp_wpds_politicians_by_country.csv')
#save data with no matches
no_match_data.to_csv('wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
Step 5: Analysis The analysis will consist of calculating the proportion (as a percentage) of articles-per-population and high-quality articles for each country AND for each geographic region. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes.
###Code
#create a dataframe with just high quality articles
high_quality_pages = merged_data[(merged_data['article_quality_est.'] == 'FA') | (merged_data['article_quality_est.'] == 'GA') ]
###Output
_____no_output_____
###Markdown
PROPORTION OF ARTICLES PER POPULATION FOR EACH COUNTRY
###Code
#country population
country_pop = merged_data.groupby(['country'])['population'].mean()
#number of articles
page_count = merged_data.groupby(['country'])['article_name'].count()
#articles as a percentage of country population
pages_per_country = page_count/country_pop*100
pages_per_country
###Output
_____no_output_____
###Markdown
PROPORTION OF HIGH QUALITY ARTICLES PER POPULATION FOR EACH COUNTRY
###Code
#country population
high_country_pop = high_quality_pages.groupby(['country'])['population'].mean()
#number of articles
high_page_count = high_quality_pages.groupby(['country'])['article_name'].count()
#high-quality articles as a percentage of country population
high_pages_per_country = high_page_count/high_country_pop*100
high_pages_per_country
###Output
_____no_output_____
###Markdown
RELATIVE PROPORTION OF POLITICIAN ARTICLES THAT ARE OF GA-FA QUALITY
###Code
relative_high_pages = high_page_count/page_count
relative_high_pages
###Output
_____no_output_____
###Markdown
Step 6: Results The results are presented in the following 6 tables: - Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population - Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population - Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality - Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality - Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population - Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality Top 10 countries by coverage
###Code
top_ten_pages_per_country = pages_per_country.sort_values(ascending=False)[0:10]
top_ten_pages_per_country
###Output
_____no_output_____
###Markdown
Bottom 10 countries by coverage
###Code
bottom_ten_pages_per_country = pages_per_country.sort_values(ascending=True)[0:10]
bottom_ten_pages_per_country
###Output
_____no_output_____
###Markdown
Top 10 countries by quality
###Code
top_ten_high_pages_per_country = high_pages_per_country.sort_values(ascending=False)[0:10]
top_ten_high_pages_per_country
###Output
_____no_output_____
###Markdown
Bottom 10 countries by quality
###Code
bottom_ten_high_pages_per_country = high_pages_per_country.sort_values(ascending=True)[0:10]
bottom_ten_high_pages_per_country
###Output
_____no_output_____
###Markdown
Top 10 countries by relative quality
###Code
top_ten_relative_high_pages_per_country = relative_high_pages.sort_values(ascending=False)[0:10]
top_ten_relative_high_pages_per_country
###Output
_____no_output_____
###Markdown
Bottom 10 countries by relative quality
###Code
bottom_ten_relative_high_pages_per_country = relative_high_pages.sort_values(ascending=True)[0:10]
bottom_ten_relative_high_pages_per_country
###Output
_____no_output_____
###Markdown
Exploring the concept of bias in data through Wikipedia articles Overview In this work I explore the concept of bias in data through Wikipedia articles. For this purpose, I use a publicly available [dataset of wikipedia articles about politicians from different countries](https://figshare.com/articles/Untitled_Item/5513449) and also take advantage of a machine learning web service called [ORES](https://www.mediawiki.org/wiki/ORES) to estimate the quality of each of these articles. I then combine this data with another publicly available dataset of country populations (with population information as of Mid-2015) from the [Population Reference Bureau website](http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14). With the combined dataset, I then produce tabular visualizations of the following: 1. 10 highest-ranked countries in terms of number of politician articles as a proportion of country population 2. 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population 3. 10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country 4. 10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country Notebook flow The following sections of this notebook are organized in the below format for a step-by-step walkthrough of the different activities performed to achieve the desired result: 1. Loading Prerequisite Libraries and declaring variables 2. Data Acquisition 3. Data Processing 4. Data Visualization. Most of the steps in this notebook can be repeated and followed along if readers want to reproduce the work for themselves. Loading Prerequisite Libraries and declaring commonly used values as variables In this section, I have the code to load a few modules from some of the publicly available Python libraries that serve as good helper methods for use throughout the rest of the code in this notebook. The purpose of each library is described in brief as comments inline with the code. Also, I like to define upfront all the variables and settings I would be using in this notebook so they are all together for easy access, and so that readers who are following along have a chance to modify these values (for example, file names) as they please without impacting the flow of the rest of the code.
###Code
#json library has some good helper methods for working with JSON objects.
#Since our raw data is in json format, we need the ability to deserialize json data into python objects for consumption.
import json
#Periodically we would need a way to check the intermediate results. We do that by printing the values of the variables.
#This is done using the display module from the IPython.core.display library.
from IPython.core.display import display
#pandas is another super useful python library that has many valuable data storage and manipulation functions.
import pandas as pd
#numpy for using na
import numpy as np
#requests module would be used to retrieve the data from the REST API endpoints
import requests
#module used during printing exception info
import sys
#Directory where the raw files exist
raw_data_dir = './data/raw/'
#Directory where the processed file will be saved to (at the end of this notebook if all steps are successful)
processed_data_dir = './data/processed/'
#Variables to hold the file names of the raw data files
page_data_file = 'page_data.csv'
page_data_with_scores_file = 'page_data_scores.csv'
population_mid_2015_file = 'Population Mid-2015.csv'
#Variables to hold the file names to contain the processed data at the end of successful execution of the steps in this notebook.
page_data_with_scores_and_population_file = 'page_data_with_scores_and_population.csv'
#header values that are required to be passed to the API.
#NOTE: You are strongly advised to modify these values to point to your github url and account if you plan on running this code
headers = {'User-Agent' : 'https://github.com/sumanbhagavathula', 'From' : '[email protected]'}
project = 'enwiki'
model = 'wp10'
#REST API endpoint for ORES
endpoint = 'https://ores.wikimedia.org/v3/scores/{project}/?models={model}&revids={revid}'
#batch size of revid list to speed up retrieving the scores using ORES endpoint
batch_size=50
#preventing scientific notation of numbers in the results of executions in this notebook, for readability
pd.set_option('display.precision', 20)
###Output
_____no_output_____
###Markdown
Data Acquisition Steps In this section, I have the code to acquire the three datasets (two raw datasets and a third containing the ORES scores for the final revision ids of the articles) that are required for this analysis. More information, along with provenance, is provided in the overview section. Note that I downloaded the two raw datasets from the respective websites and included the Wikipedia articles dataset in this repository. The dataset on country population has restrictions on access and hence I have not included it in this repository. So, if you would like to follow along, you will need to download the file yourself directly from the source website and save it to the /data/raw directory to be able to run most of the steps of this notebook. Please also note the format of the calls to the ORES machine learning API that provides an estimate of the article quality. We can either pass in one revision id per API call or pipe a batch of them; the latter speeds up the retrieval process and is used in this work. However, there is a limit to how many can be piped at a time. While I do not know the exact limit, I have used a batch size of 50, as you may have noticed in the variable declaration section. You may modify that value to experiment with other batch sizes if you are interested. You may also choose to skip the ORES API calls and proceed to the next step in order to use the offline copy made available in this repository, to avoid having to wait while retrieving the scores again. importing the two datasets
###Code
#import the datasets
page_data = pd.read_csv(raw_data_dir+page_data_file)
display(page_data.head())
#NOTE: Population Mid-2015 is copyrighted by PRB and is not included in this repository. TBD: include a link.
population_mid_2015 = pd.read_csv(raw_data_dir+population_mid_2015_file)
display(population_mid_2015.head())
###Output
_____no_output_____
###Markdown
wrapping the ORES score retrieval mechanism into a function that can be reused any number of times as needed and testing its functionality using some sample values
###Code
#function to call ORES endpoint and retrieve the predicted quality score
# note: the try/except below reports unexpected errors from the API response
def predicted_ores_score(pipedrevids):
    params = {'project' : project,
              'model' : model,
              'revid' : (pipedrevids)
              }
    # pass the headers defined earlier so the ORES team can identify the requester
    api_call = requests.get(endpoint.format(**params), headers=headers)
try:
response = api_call.json()['enwiki']['scores']
return(response)
except:
print("Unexpected error:", sys.exc_info()[0])
return
#for testing purposes: sample call, can comment out after testing
output = (predicted_ores_score('798538579|798539797|798541884|798544723|798548287|798550386|798552371|798552999|798553325|798553329|798553416|798555546|798555786|798555984|798556097|798556283|798556613|798558498|798558692|798560330|798560381|798561197|798561595|798561903|798564951|798565577|798565999|798566821|798567091|798567112|798567496|798569014|798569398|798570513|798571014|798574254|798574475|798575514|798576875|798577626|798578057|798578265|798579045|798579775|798580067|798582884|798584054|798584996|798585322|798588458'))
display(output)
for key in output:
print('revision_id:' + str(key) + ', score:' + output[key][model]['score']['prediction'])
###Output
_____no_output_____
###Markdown
wrapping the steps required to save ORES Scores into a function for any time reuse as needed
###Code
def save_ores_scores(edits_ores_scores_batch_json, edits_ores_scores):
for key in edits_ores_scores_batch_json:
if(str(edits_ores_scores_batch_json[key][model]).find('RevisionNotFound')!=-1):
print(edits_ores_scores_batch_json[key][model])
else:
edits_ores_scores.append({'revision_id':key,'score':edits_ores_scores_batch_json[key][model]['score']['prediction']})
return edits_ores_scores
###Output
_____no_output_____
###Markdown
retrieve ORES Scores for article revisions in the wikipedia articles dataset
###Code
#if you wish to use the already downloaded dataset and not rerun the ORES endpoint to save time, please skip this step
#and run the optional next step to load the offline dataset that I have saved during my execution.
#in batches of batch_size, call ORES endpoint and retrieve the scores
pipedrevids = ''
edits_ores_scores_list = []
for i in range(0,len(page_data['rev_id'])):
pipedrevids = pipedrevids + (str(page_data.loc[i,'rev_id']))
if i==len(page_data['rev_id'])-1:
edits_ores_scores_batch_json = predicted_ores_score(pipedrevids)
edits_ores_scores_list = save_ores_scores(edits_ores_scores_batch_json,edits_ores_scores_list)
pipedrevids = ''
break;
elif i == 0 or i%batch_size != 0:
pipedrevids = pipedrevids + '|'
else:
edits_ores_scores_batch_json = predicted_ores_score(pipedrevids)
edits_ores_scores_list = save_ores_scores(edits_ores_scores_batch_json,edits_ores_scores_list)
pipedrevids = ''
#for testing purposes: break condition to speed up testing, can comment out after testing
#if i == 2400:
#break;
###Output
{'error': {'message': 'RevisionNotFound: Could not find revision ({revision}:806811023)', 'type': 'RevisionNotFound'}}
{'error': {'message': 'RevisionNotFound: Could not find revision ({revision}:807367030)', 'type': 'RevisionNotFound'}}
{'error': {'message': 'RevisionNotFound: Could not find revision ({revision}:807367166)', 'type': 'RevisionNotFound'}}
{'error': {'message': 'RevisionNotFound: Could not find revision ({revision}:807484325)', 'type': 'RevisionNotFound'}}
###Markdown
save the revision ids' ORES scores into a separate file for offline use, to speed up reproducing this analysis or to cover the case where the ORES endpoint is unavailable
###Code
#convert the scores list to pandas dataframe for easier processing operations
edits_ores_scores = pd.DataFrame(edits_ores_scores_list)
#save the dataset with the ORES scores
edits_ores_scores.to_csv(raw_data_dir+page_data_with_scores_file, index=False)
#display the first few rows to get an idea about how the dataset looks like.
display(edits_ores_scores.head())
###Output
_____no_output_____
###Markdown
Data Processing Steps In this section, I perform the required steps to clean up the data (renaming some columns, dropping some columns and filtering as needed) and to transform the data (modifying data types of columns and joining datasets as needed). At the end of this section, we will have a final data structure that can be used for our analysis purposes. We start with an optional step to load the ORES scores dataset that was saved to an offline folder in the previous step. This is to facilitate readers who are following along with the code in this notebook but prefer to skip the time-consuming process of retrieving ORES scores, or for when the ORES scoring API is unavailable for some reason. load the ORES scores dataset
###Code
#As mentioned above, this is an optional step. If you are following along and have run the previous step
#to retrieve the scores using ORES api and assuming that was successful, you can skip this step
#otherwise, run this to load the page scores dataset.
edits_ores_scores = pd.read_csv(raw_data_dir+page_data_with_scores_file)
#display the first few rows to make sure the dataset is loaded.
display(edits_ores_scores.head())
###Output
_____no_output_____
###Markdown
rename rev_id column in the page data dataset to revision_id to facilitate merge operation in next step
###Code
#In the page data dataset, rename the rev_id column to revision_id
#so as to have a common name with the edit scores dataset and be able to join in next steps
page_data = page_data.rename(columns={"rev_id":"revision_id"})
#display the first few rows to see the renamed column in the dataframe
display(page_data.head())
###Output
_____no_output_____
###Markdown
merge page data and ORES scores datasets to get a combined dataset that has ORES scores for wikipedia articles, where available
###Code
#combine the page data and edit scores datasets
page_data_with_scores = page_data.merge(edits_ores_scores,how='inner',on=['revision_id'])
#display the first few rows of the merged dataset
display(page_data_with_scores.head())
###Output
_____no_output_____
###Markdown
rename the Location column in the population dataset to facilitate the join with the page data and scores dataset in the next step. Also convert the Data column from string to a numeric type, since population is a count rather than a string
###Code
#In the population dataset, rename Location column to country
#so as to have a common name with the edit scores dataset and be able to join in next steps
population_mid_2015 = population_mid_2015.rename(columns={"Location":"country"})
population_mid_2015['Data'] = pd.to_numeric(population_mid_2015['Data'].str.replace(',',''))
#display the first few rows to see the renamed column in the dataframe
display(population_mid_2015.head())
###Output
_____no_output_____
###Markdown
merge the page scores and population data to get the final data together into one dataset and for further processing
###Code
#combine the page data and edit scores datasets
page_data_with_scores_and_population = page_data_with_scores.merge(population_mid_2015)
#display the first few rows of the merged dataset
display(page_data_with_scores_and_population.head())
###Output
_____no_output_____
###Markdown
retain only the relevant columns and rename fields as needed to get the final dataset ready for analysis use
###Code
#keep only the necessary columns in the dataset
page_data_with_scores_and_population = page_data_with_scores_and_population.loc[:,('country','page','revision_id','score','Data')]
#Rename the columns to form the final dataset
page_data_with_scores_and_population = page_data_with_scores_and_population.rename(columns={'page':'article_name','score':'article_quality','Data':'population'})
#display the first few rows of the final dataset
page_data_with_scores_and_population.head()
###Output
_____no_output_____
###Markdown
save the final dataset for offline analysis and usage
###Code
#save this dataset
page_data_with_scores_and_population.to_csv(processed_data_dir+page_data_with_scores_and_population_file,index=False)
###Output
_____no_output_____
###Markdown
Data Analysis Steps This section consists of the steps and code required to perform the relevant aggregations and joins that are needed to perform the analysis that was the focus of this notebook. These summary views will then be required in the next section to come up with our final tabular format visualizations. get the summary of all politician articles count and total population as of mid-2015 for all countries where at least one article was published. Other countries that may have existed in the PRB dataset will be skipped from here
###Code
country_revisionid=page_data_with_scores_and_population.loc[:,('country','revision_id')]
country_revisionid_count = country_revisionid.groupby(by='country',as_index=False).count()
country_allarticlescount = country_revisionid_count.rename(columns={'revision_id':'all_articles_count'})
country_population_raw = page_data_with_scores_and_population.loc[:,('country','population')]
country_population = country_population_raw.groupby(by='country',as_index=False).max()
country_population_allarticlescount = country_allarticlescount.merge(country_population)
country_population_allarticlescount.head()
###Output
_____no_output_____
###Markdown
get the summary of the high quality articles (for politicians) and total population as of mid-2015 for all countries where at least one article was published. Other countries that may have existed in the PRB dataset will be skipped from here
###Code
country_articlequality_revisionid=page_data_with_scores_and_population.loc[:,('country','article_quality','revision_id')]
country_highqualityarticles_revisionid = (country_articlequality_revisionid
[(country_articlequality_revisionid['article_quality']=='GA')
|(country_articlequality_revisionid['article_quality']=='FA')])
country_revisionid_filtered = country_highqualityarticles_revisionid.loc[:,('country','revision_id')]
country_highqualityarticlecount_raw = country_revisionid_filtered.groupby(by='country',as_index=False).count()
country_highqualityarticlecount = (country_highqualityarticlecount_raw
.rename(columns={'revision_id':'highquality_articles_count'}))
country_population_allandhighqualityarticlecount = (country_population_allarticlescount.merge
(country_highqualityarticlecount,how='left',on='country'))
country_population_allandhighqualityarticlecount.head()
###Output
_____no_output_____
###Markdown
since some countries may not have even a single high quality article, we replace NaN with zero for such countries. Also, change the data type of the high quality articles count to int, since counts can only be integers
###Code
#replace NaN in highquality_articles_count with zeros
country_population_allandhighqualityarticlecount['highquality_articles_count'].fillna(int(0), inplace=True)
country_population_allandhighqualityarticlecount['highquality_articles_count'] = (country_population_allandhighqualityarticlecount
['highquality_articles_count'].astype(int))
###Output
_____no_output_____
###Markdown
now we calculate the proportion of articles in each category: first, we define and calculate the proportion of articles per population as the ratio of the number of articles to the total population of that country (expressed as a percentage)
###Code
country_population_allandhighqualityarticlecount['articles_per_population'] = (
country_population_allandhighqualityarticlecount['all_articles_count']*100.0
/country_population_allandhighqualityarticlecount['population'])
country_population_allandhighqualityarticlecount.head()
###Output
_____no_output_____
###Markdown
and then we define and calculate proportion of high quality articles to all articles count for that country
###Code
country_population_allandhighqualityarticlecount['highqualityarticles_percentage'] = (
country_population_allandhighqualityarticlecount['highquality_articles_count']*100.0
/country_population_allandhighqualityarticlecount['all_articles_count'])
country_population_allandhighqualityarticlecount.head()
###Output
_____no_output_____
###Markdown
retain only the relevant columns that are needed for the next section on Visualization
###Code
country_all_and_highquality_articles_per_population = (
country_population_allandhighqualityarticlecount.loc[:,('country','articles_per_population'
,'highqualityarticles_percentage')])
country_all_and_highquality_articles_per_population.head()
###Output
_____no_output_____
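###Markdown
As a small alternative to sort_values().head(10) in the tables that follow, DataFrame.nlargest and DataFrame.nsmallest give the same top-10 and bottom-10 selections with slightly less code; a minimal sketch:
###Code
# Minimal sketch; in a notebook only the last expression is displayed.
country_all_and_highquality_articles_per_population.nlargest(10, 'articles_per_population')
country_all_and_highquality_articles_per_population.nsmallest(10, 'articles_per_population')
###Output
_____no_output_____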
###Markdown
Data Visualization Steps In this section, we perform the relevant steps for coming up with the four visualization we set forth with at the beginning of this notebook. Note that these visualizations are very simple tabular format reports with no sophistication. ten highest ranked countries in terms of number of politician articles as proportion of country population
###Code
(pd.DataFrame(country_all_and_highquality_articles_per_population
.sort_values(by='articles_per_population',ascending=False)
.loc[:,('country','articles_per_population')]
.head(10)
.values,columns=['country','articles_per_population']))
###Output
_____no_output_____
###Markdown
ten lowest ranked countries in terms of number of politician articles as proportion of country population
###Code
(pd.DataFrame(country_all_and_highquality_articles_per_population
.sort_values(by='articles_per_population',ascending=True)
.loc[:,('country','articles_per_population')]
.head(10)
.values,columns=['country','articles_per_population']))
###Output
_____no_output_____
###Markdown
10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
(pd.DataFrame(country_all_and_highquality_articles_per_population
.sort_values(by='highqualityarticles_percentage',ascending=False)
.loc[:,('country','highqualityarticles_percentage')]
.head(10)
.values,columns=['country','highqualityarticles_percentage']))
###Output
_____no_output_____
###Markdown
10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
###Code
(pd.DataFrame(country_all_and_highquality_articles_per_population
.sort_values(by='highqualityarticles_percentage',ascending=True)
.loc[:,('country','highqualityarticles_percentage')]
.head(10)
.values,columns=['country','highqualityarticles_percentage']))
###Output
_____no_output_____
###Markdown
A2 - Bias in Data We will explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. In this notebook, we will combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article. Step 1: Data acquisition In this section, we will load the required datasets for the analysis. We will use 2 datasets: * Wikipedia politicians by country dataset - [FigShare Link](https://figshare.com/articles/dataset/Untitled_Item/5513449) * World population data sheet - [Population Reference Bureau](https://www.prb.org/international/indicator/population/table/) We have downloaded the dataset files into the repository from the sources above: * Wikipedia politicians by country dataset - data/page_data.csv * World population data sheet - data/world_population.csv
###Code
import pandas as pd
# Load Wikipedia politicians by country dataset
politicians_dataset = pd.read_csv("../data/page_data.csv")
# Load World population data sheet
world_population_dataset = pd.read_csv("../data/world_population.csv")
###Output
_____no_output_____
###Markdown
Step 2: Data cleaning In this step we will perform some cleaning on the datasets obtained from the previous step. Specifically, we will: For the Wikipedia politicians by country dataset: * Remove all rows whose page name starts with `Template:` For the World population data sheet: * Separate country records and sub-region records into different files
###Code
politicians_dataset_cleaned = politicians_dataset[~politicians_dataset.page.str.startswith("Template:")]
world_population_dataset_cleaned_country = world_population_dataset[~world_population_dataset.Name.str.isupper()]
world_population_dataset_cleaned_sub_region = world_population_dataset[world_population_dataset.Name.str.isupper()]
###Output
_____no_output_____
###Markdown
Then we will store the cleaned datasets in `data-cleaned/`:* Cleaned Wikipedia politicians by country dataset: `data-cleaned/page_data_cleaned.csv`* Cleaned World population data sheet (Country): `data-cleaned/world_population_cleaned_country.csv`* Cleaned World population data sheet (Sub-Region): `data-cleaned/world_population_cleaned_sub_region.csv`
###Code
politicians_dataset_cleaned.to_csv('../data-cleaned/page_data_cleaned.csv')
world_population_dataset_cleaned_country.to_csv('../data-cleaned/world_population_cleaned_country.csv')
world_population_dataset_cleaned_sub_region.to_csv('../data-cleaned/world_population_cleaned_sub_region.csv')
###Output
_____no_output_____
###Markdown
Step 3: Getting Article Quality Predictions For this step, we will run the "ORES" model on the `rev_id`s of the Wikipedia pages within the politicians dataset. I was not able to install the ORES package through pip ([official repository](https://github.com/wikimedia/ores)), so I will instead query the predictions through the [REST API](https://ores.wikimedia.org/v3/!/scoring/get_v3_scores_context_revid_model). Next we will run ORES on the politicians dataset
###Code
import requests
import pickle
from tqdm import tqdm
def get_prediction(rev_id):
    headers = {
        'User-Agent': 'https://github.com/wanggy0201',
        'From': '[email protected]'
    }
    config = "https://ores.wikimedia.org/v3/scores/enwiki?models=articlequality&revids=" + str(rev_id)
    # make the request once and reuse the response instead of hitting the API twice
    response = requests.get(config, headers=headers)
    if response.status_code == 200:
        return response.json()
    else:
        print("Not able to retrieve records for rev_id: " + str(rev_id))
        return "NaN"
def get_predictions_from_df(df, batch_size=50):
rev_ids = df['rev_id'].tolist()
result = {}
for i in tqdm(range(len(rev_ids)//batch_size + 1)):
rev_id_batch = rev_ids[i * batch_size: min((i + 1) * batch_size, len(rev_ids))]
rev_id_batch = ('|').join(map(str,rev_id_batch))
responses = get_prediction(rev_id_batch)['enwiki']['scores']
result.update(responses)
with open("../data-cleaned/api_responses.pickle","wb") as file:
pickle.dump(result, file)
return result
###Output
_____no_output_____
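###Markdown
A minimal usage sketch of the helpers above with a tiny, hypothetical dataframe (the rev_ids are placeholders). The call is commented out so the cell does not hit the API or overwrite the pickle checkpoint when re-run.
###Code
toy_df = pd.DataFrame({'rev_id': [798538579, 798539797]})  # placeholder ids for illustration only
# toy_scores = get_predictions_from_df(toy_df, batch_size=2)
###Output
_____no_output_____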
###Markdown
Toggle `run_queries` to call the REST API for predictions, if not we will load the file existing in the repository
###Code
run_queries = True
if run_queries:
results = get_predictions_from_df(politicians_dataset_cleaned)
else:
with open("../data-cleaned/api_responses.pickle","rb") as file:
results = pickle.load(file)
###Output
100%|██████████| 935/935 [13:56<00:00, 1.02s/it]
###Markdown
Here we extract the probability from the responses and join it with the politicians dataset
###Code
def extract_probability(rev_id):
rev_id_str = str(rev_id)
if 'score' in results[rev_id_str]['articlequality']:
return results[rev_id_str]['articlequality']['score']['prediction']
else:
return "NaN"
politicians_dataset_cleaned['prediction'] = politicians_dataset_cleaned['rev_id'].apply(extract_probability)
politicians_dataset_cleaned['prediction'].unique()
###Output
_____no_output_____
###Markdown
Logging the pages that have no prediction, then filtering them out
###Code
print("These are the pages that does not have predictions:")
politicians_dataset_cleaned[politicians_dataset_cleaned['prediction'] == 'NaN']
politicians_dataset_cleaned = politicians_dataset_cleaned[politicians_dataset_cleaned['prediction'] != 'NaN']
politicians_dataset_cleaned['prediction'].unique()
###Output
_____no_output_____
###Markdown
Step 4: Combining datasets
###Code
joined_df = pd.merge(
politicians_dataset_cleaned,
world_population_dataset_cleaned_country,
left_on=['country'],
right_on=['Name'],
how='outer',
indicator=True
)
joined_df = joined_df.rename(columns = {
"page" : "article_name",
"rev_id":"revision_id",
"prediction":"article_quality_est.",
"Population":"population"
})[['country', 'article_name', 'revision_id', 'article_quality_est.', 'population', '_merge']]
###Output
_____no_output_____
###Markdown
Get and save entries that have no match to `data-combined/wp_wpds_countries-no_match.csv`
###Code
no_match = joined_df[joined_df['_merge'] != 'both'].drop(columns=['_merge'])
no_match.to_csv('../data-combined/wp_wpds_countries-no_match.csv')
###Output
_____no_output_____
###Markdown
Get and save rest of the entries to `data-combined/wp_wpds_politicians_by_country.csv`
###Code
joined_df = joined_df[joined_df['_merge'] == 'both'].drop(columns=['_merge'])
joined_df.to_csv('../data-combined/wp_wpds_politicians_by_country.csv')
###Output
_____no_output_____
###Markdown
Step 5: Analysis In this step we will calculate our metrics used for analysis:* The proportion (as a percentage) of articles-per-population and high-quality articles for each country* The proportion (as a percentage) of articles-per-population and high-quality articles for each geographic regionIn order to do that, we will create 4 dataframes:* `country_high_quality_proportion`: The proportion (as a percentage) of high-quality articles for each country* `country_all_articles_proportion`: The proportion (as a percentage) of articles-per-population for each country* `region_high_quality_proportion`: The proportion (as a percentage) of high-quality articles for each geographic region* `region_all_article_proportion`: The proportion (as a percentage) of articles-per-population for each geographic region
###Code
high_quality_df = joined_df[(joined_df['article_quality_est.'] == 'FA') | (joined_df['article_quality_est.'] == 'GA')]
country_population = joined_df[['country', 'population']].drop_duplicates()
country_high_quality_articles_count = high_quality_df.groupby(['country']).size().reset_index(name='high_quailty_articles')
country_all_articles_count = joined_df.groupby(['country']).size().reset_index(name='all_articles')
###Output
_____no_output_____
###Markdown
Calculating the proportion of high-quality articles per population for each country
###Code
country_high_quality_proportion = pd.merge(
country_population,
country_high_quality_articles_count,
on=['country'],
how='left'
).fillna(0)
country_high_quality_proportion['high_quality_proportion'] = country_high_quality_proportion['high_quailty_articles'] / country_high_quality_proportion['population']
###Output
_____no_output_____
###Markdown
Calculating the proportion of articles-per-population for each country
###Code
country_all_articles_proportion = pd.merge(
country_population,
country_all_articles_count,
on=['country'],
how='left'
).fillna(0)
country_all_articles_proportion['all_article_proportion'] = country_all_articles_proportion['all_articles'] / country_all_articles_proportion['population']
###Output
_____no_output_____
###Markdown
Here we calculate the country to sub-region mapping
###Code
regions = world_population_dataset_cleaned_sub_region['Name'].tolist()
def find_region(country, population_df=world_population_dataset):
index = population_df.index[population_df['Name'] == country].tolist()[0]
while population_df.iloc[index]['Name'] not in regions:
index -= 1
return population_df.iloc[index]['Name']
# Create a new column for sub region
distinct_countries = joined_df[['country']].drop_duplicates()
distinct_countries['region'] = distinct_countries['country'].apply(find_region)
joined_df_with_region = pd.merge(
joined_df,
distinct_countries,
on=['country'],
how='left'
)
# calculate region population
region_population = joined_df_with_region[['country', 'region', 'population']].drop_duplicates().groupby(['region']).sum()
region_high_quality_df = joined_df_with_region[(joined_df_with_region['article_quality_est.'] == 'FA') | (joined_df_with_region['article_quality_est.'] == 'GA')]
region_high_quality_articles_count = region_high_quality_df.groupby(['region']).size().reset_index(name='high_quailty_articles')
region_all_articles_count = joined_df_with_region.groupby(['region']).size().reset_index(name='all_articles')
###Output
_____no_output_____
###Markdown
Calculating the proportion of articles-per-population and high-quality articles for each region
###Code
region_high_quality_proportion = pd.merge(
region_population,
region_high_quality_articles_count,
on=['region'],
how='left'
).fillna(0)
region_high_quality_proportion['high_quality_proportion'] = region_high_quality_proportion['high_quailty_articles'] / region_high_quality_proportion['population']
region_all_article_proportion = pd.merge(
region_population,
region_all_articles_count,
on=['region'],
how='left'
).fillna(0)
region_all_article_proportion['all_articles_proportion'] = region_all_article_proportion['all_articles'] / region_all_article_proportion['population']
###Output
_____no_output_____
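###Markdown
A small, optional consistency check, under the assumption that every merged row was assigned a region by find_region: the per-region article counts should sum back to the total number of merged rows.
###Code
# Optional sanity check on the region aggregation above.
assert region_all_articles_count['all_articles'].sum() == len(joined_df_with_region)
###Output
_____no_output_____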
###Markdown
Step 6: Results 1. Top 10 countries by coverage: 10 highest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
country_all_articles_proportion.sort_values(by=['all_article_proportion'], ascending=False)[0:10]
###Output
_____no_output_____
###Markdown
2. Bottom 10 countries by coverage: 10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
###Code
country_all_articles_proportion.sort_values(by=['all_article_proportion'])[0:10]
###Output
_____no_output_____
###Markdown
3. Top 10 countries by relative quality: 10 highest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
country_high_quality_proportion.sort_values(by=['high_quality_proportion'], ascending=False)[0:10]
###Output
_____no_output_____
###Markdown
4. Bottom 10 countries by relative quality: 10 lowest-ranked countries in terms of the relative proportion of politician articles that are of GA and FA-quality
###Code
country_high_quality_proportion.sort_values(by=['high_quality_proportion'])[0:10]
###Output
_____no_output_____
###Markdown
5. Geographic regions by coverage: Ranking of geographic regions (in descending order) in terms of the total count of politician articles from countries in each region as a proportion of total regional population
###Code
region_all_article_proportion.sort_values(by=['all_articles_proportion'], ascending=False)
###Output
_____no_output_____
###Markdown
6. Geographic regions by relative quality: Ranking of geographic regions (in descending order) in terms of the relative proportion of politician articles from countries in each region that are of GA and FA-quality
###Code
region_high_quality_proportion.sort_values(by=['high_quality_proportion'], ascending=False)
###Output
_____no_output_____
###Markdown
Reflections and Implications. My initial question when I saw this analysis approach was whether the number of well-known (or at least well-documented) politicians correlates with population at all. This is under the assumption that only politicians who are well known enough will have authors write about them on Wikipedia, regardless of quality. It is not intuitive that the more population a country has, the more politicians it will have. Other factors could matter more for each country, such as the length of the country's history (its age), its government structure, and so on. For example, countries with extensive histories like France and the UK will have far more well-known politicians than newer countries such as Singapore. This can tell a different story than population does. Therefore the biases I expected before the analysis were: * The proportion of politician articles per population will be biased towards countries with low population, i.e. less populated countries will have higher rates. * Countries with longer and richer histories will have higher politician-articles-per-population rates. * Countries that receive more attention in the world will have a higher rate of high-quality articles. * Countries that are likely to have more writers (UK, US) will have a higher rate of high-quality articles. In the end the results prove the first point true, but none of the other three points were reflected in the analysis. The countries that came out on top of both the all-article and high-quality-article per-population rankings are countries with very low population. The difference in population dominated the rates and made the actual article counts insignificant: Tuvalu had only 54 articles but came to the top of the list with the highest rates, while India and China had around 1,000 articles each yet sat at the top of the table for the lowest rates. I will try to explain why the other three expected biases did not show up in the analysis. Population became such a dominant factor that the other factors could not compete. For example, we do see large article counts for France, Australia, India, and China; these are the countries with the most articles, and that correlates well with the length and richness of their histories. The fourth point also holds when we look at the list of countries with the most high-quality articles: the UK and US came in as the top two. I have attached the two tables below. In addition to validating my assumptions, I also found a bias between English-speaking and non-English-speaking regions. In both the article count per population and the high-quality article count per population for each region, Europe and North America stood out, while non-English-speaking regions such as Asia and Africa fell to the bottom. This makes sense, since authors tend to focus their attention on countries that speak their own language and are more likely to write about politicians in their own countries. I do think that any business or research effort relying on the assumption made in this analysis - tying population to the count of articles, regardless of quality - will end up with findings that are heavily biased by population: countries or regions with lower population will receive significantly more attention simply because population is such a dominant factor.
If they really want to make a case out of this, I would suggest reducing the impact of population by taking a log of it or by log-normalizing it.
###Code
country_all_articles_proportion.sort_values(by=['all_articles'], ascending=False)[0:10]
country_high_quality_proportion.sort_values(by=['high_quailty_articles'], ascending=False)[0:10]
###Output
_____no_output_____
###Markdown
A2 - Bias. By: Benjamin Brodeur Mathieu. Date: 10/05/2019. Overview: The goal of this assignment is to reflect on sources of bias by analyzing coverage and relative article quality, by country and geographical region, of articles about politicians taken from the English Wikipedia. Step 1: Data acquisition The data for this analysis comes from: 1. [The Wikipedia politicians by country dataset](https://figshare.com/articles/Untitled_Item/5513449) 2. [Population Reference Bureau, mid-2018 population by country](https://www.prb.org/international/indicator/population/table/) and is located in the `raw_data` folder. See the repository's README.md file for additional details. Step 2: Cleaning the data First we will import a few libraries needed for our analysis. The `pandas` library will be used for loading and manipulating the data. > `pandas` uses the `numpy` library behind the scenes to handle multidimensional arrays efficiently. We will import this library as well to help with specific manipulations later on.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
We load the data csv files from the `raw_data` folder and output the first few rows of each to make sure they were loaded correctly.
###Code
politicians_by_country = pd.read_csv('../raw_data/page_data.csv')
politicians_by_country.head(2)
population_by_geography = pd.read_csv('../raw_data/WPDS_2018_data.csv', thousands=',')
population_by_geography.head(2)
###Output
_____no_output_____
###Markdown
To simplify the use of the `population_by_geography` table we will rename its columns `geo` and `pop`.
###Code
population_by_geography.columns = ['geo', 'pop']
population_by_geography.head(2)
###Output
_____no_output_____
###Markdown
We can see that some rows of the `politicians_by_country` dataframe's `page` column contains the "Template:" prefix. These pages are not Wikipedia articles and will be removed below.
###Code
# ~ is used as the standard ! (negation operator)
template_prefix_filter = ~politicians_by_country.page.str.startswith('Template:')
politicians_by_country = politicians_by_country[template_prefix_filter]
politicians_by_country.head(3)
###Output
_____no_output_____
###Markdown
The `population_by_geography` contains some cumulative regional (i.e. AFRICA, OCEANIA) population counts. Regions are ALL CAPS values in the `geo` column. These rows won't match with the country field of our `politicians_by_country` table, so we will remove them to form the `population_by_country` table and keep the other rows.
###Code
# Only regions are in ALL CAPS
region_filter = population_by_geography.geo.str.isupper()
population_by_country = population_by_geography[~region_filter]
population_by_country.columns = ['country', 'pop']
population_by_country.head(3)
###Output
_____no_output_____
###Markdown
Step 3: Getting article quality predictions We will be gathering quality prediction data from the [ORES](https://www.mediawiki.org/wiki/ORES) (Objective Revision Evaluation Service) machine learning system. The code in the cell below was provided as sample code to use with the ores package.
###Code
from ores import api
# We provide this useragent string (second arg below) to help the ORES team track requests
ores_session = api.Session("https://ores.wikimedia.org", "Class project: [email protected]")
# Fetch the article quality using the rev_id values
results = ores_session.score("enwiki", ["articlequality"], politicians_by_country.rev_id.values)
###Output
_____no_output_____
###Markdown
For each article in the results we obtain the prediction and place it in a list. If the prediction was not available, we instead use a `no_prediction_token` as the value.
###Code
article_quality_col = []
no_prediction_token = 'NOT_FOUND'
for score in results:
found_prediction = False
# Is a prediction in the score object ?
if 'articlequality' in score:
if 'score' in score['articlequality']:
if 'prediction' in score['articlequality']['score']:
article_quality_col.append(score['articlequality']['score']['prediction'])
found_prediction = True
# No predictions were found
if not found_prediction:
article_quality_col.append(no_prediction_token)
# Output the first five values to validate
article_quality_col[0:5]
###Output
_____no_output_____
###Markdown
We add the newly extracted article_quality column to the politicians_by_country dataframe.
###Code
politicians_by_country['article_quality'] = article_quality_col
politicians_by_country.head(2)
###Output
_____no_output_____
###Markdown
We save the articles whose ratings weren't found to a file named `ores_not_found.csv` in the artifacts folder. We will use them later in the analysis phase. For now, we remove these rows from our `politicians_by_country` table.
###Code
not_found_articles_filter = (politicians_by_country['article_quality'] == 'NOT_FOUND')
not_found_articles = politicians_by_country.loc[not_found_articles_filter]
# We do not need to include the article_quality column as it was not available
not_found_articles = not_found_articles.drop(columns=['article_quality'])
not_found_articles.to_csv('../artifacts/data/ores_not_found.csv', index=None, header=True)
# Politicians by country now only has rated articles
politicians_by_country = politicians_by_country[~not_found_articles_filter]
###Output
_____no_output_____
###Markdown
Step 4: Combining datasets

Now that our article data in the `politicians_by_country` table has the quality rating for each article, we will merge it with our `population_by_country` table into one table. We also rename our columns for readability going forward.
###Code
# pandas' merge is the equivalent of the sql join statement
# the how parameter indicates the type of merge
# outer indicates a "full outer join"
articles_and_population = pd.merge(politicians_by_country, population_by_country, on='country', how='outer')
articles_and_population.columns = ['article_name', 'country', 'revision_id', 'article_quality', 'population']
articles_and_population.head()
###Output
_____no_output_____
###Markdown
Some rows will not have had a match in the other table:

* We want to keep a record of rows with no `pop` value (NaN in the table), which indicates no match from the population_by_country table.
* We also want to keep rows for which the other fields (such as rev_id) are missing (NaN), which indicates no match from the politicians_by_country table.
###Code
no_population_match_rows = articles_and_population[articles_and_population['population'].isnull()]
no_revision_id_match_rows = articles_and_population[articles_and_population['revision_id'].isnull()]
no_match_df = no_population_match_rows.append(no_revision_id_match_rows)
###Output
_____no_output_____
###Markdown
We will now write the unmatched rows and the remaining complete rows to separate files.
###Code
articles_and_population = articles_and_population.drop(no_match_df.index)
no_match_df.to_csv('../clean_data/wp_wpds_countries_no_match.csv', index=None, header=True)
articles_and_population.to_csv('../clean_data/wp_wpds_politicians_by_country.csv', index=None, header=True)
###Output
_____no_output_____
###Markdown
Step 5: Analysis

We start by loading the cleaned data.
###Code
# We use the thousands=',' argument to specify that the population column uses commas as thousands separators
articles_and_population = pd.read_csv('../clean_data/wp_wpds_politicians_by_country.csv', thousands=',')
articles_and_population.head()
###Output
_____no_output_____
###Markdown
Our analysis will focus on:

| Area | Description |
|---|---|
| Coverage | The number of politician articles as a proportion of the country's population |
| Relative article quality | The proportion of "FA" (featured article) or "GA" (good article) articles over the total number of articles |

We are interested in these metrics by region and by country. We will use our original data source to associate each country with its region and add this to our dataset.
###Code
# Drop the population from our original dataset
geography = population_by_geography.drop(columns=['pop'])
# iterate over indexes in geography and create dictionary of countries (key) to their region (value).
# The original dataset has region in ALL_CAPS first followed by all countries in that region.
country_to_region_lookup = {}
region = ''
for i in geography.index:
country_or_region = geography.loc[i, 'geo']
# Is the 'geo' field of this row a region?
if country_or_region.isupper():
# Assign region for all countries until the next region
region = country_or_region
else:
# Assign current region to country
country_to_region_lookup[country_or_region] = region
# iterate over the articles dataset using the lookup to assign a region
# to each row based on the value of the country field
regions = []
for i in articles_and_population.index:
country = articles_and_population.loc[i, 'country']
regions.append(country_to_region_lookup[country])
# Assign region column
articles_and_population['region'] = regions
# Display as validation
articles_and_population.head(3)
###Output
_____no_output_____
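###Markdown
As a side note, the same country-to-region assignment can be done without the explicit loop. A minimal equivalent sketch using pandas' `map` with the lookup dictionary built above:
###Code
# Equivalent, vectorized alternative to the loop above (illustration only):
# map each country to its region using the lookup dictionary.
articles_and_population['region'] = articles_and_population['country'].map(country_to_region_lookup)
articles_and_population.head(3)
###Output
_____no_output_____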
###Markdown
Coverage calculation

By country

Our analysis will first focus on 'coverage', which we calculate as the number of politician articles as a proportion of the country's population. First we create a table of the article count and population by country.
###Code
# np.mean gives the mean for each group, np.size gives us the row_count (in this case the article count)
coverage_by_country = articles_and_population.groupby('country').agg({'population': np.mean, 'article_name': np.size})
coverage_by_country.columns = ['population', 'article_count']
coverage_by_country.head(2)
###Output
_____no_output_____
###Markdown
We calculate coverage in its own column and sort the table to obtain the top and bottom 10 countries by coverage.
###Code
# Reminder: we multiply by 1e6 as the population is in millions
coverage_by_country['coverage'] = (coverage_by_country.article_count/(coverage_by_country.population*1e6))
# Sort by coverage percentage descending and take 10
top_10_by_country = coverage_by_country.sort_values(by=['coverage'], ascending=False).head(10)
# Sort by coverage percentage ascending and take 10
bottom_10_by_country = coverage_by_country.sort_values(by=['coverage']).head(10)
###Output
_____no_output_____
###Markdown
By region

We'd like to do a similar exercise to see what the coverage looks like by geographical region.
###Code
# Group data by region counting the number of articles
articles_by_region = articles_and_population.groupby('region').agg({'article_name': np.size})
# Rename columns for article_count
articles_by_region.columns = ['article_count']
# Get population by region from the original table (population_by_geography)
coverage_by_region = pd.merge(articles_by_region, population_by_geography, left_on='region', right_on='geo', how='inner')
# Rename the 'pop' column
coverage_by_region = coverage_by_region.rename(columns={"pop": "population"})
# Calculate coverage (population is in millions)
coverage_by_region['coverage'] = (coverage_by_region['article_count']/(coverage_by_region.population*1e6))
# Output sorted by coverage percentage descending
coverage_by_region = coverage_by_region.sort_values(by=['coverage'], ascending=False)
# Output friendly names
coverage_by_region = coverage_by_region.rename(columns={'geo': 'region'})
coverage_by_region = coverage_by_region[['region', 'population', 'article_count', 'coverage']]
###Output
_____no_output_____
###Markdown
Coverage tables discussion
###Code
# Display logic inspired by: https://stackoverflow.com/questions/38783027/jupyter-notebook-display-two-pandas-tables-side-by-side
from IPython.display import display_html
top_10_by_country_styler = top_10_by_country.style.set_table_attributes("style='display:inline'").set_caption('Top 10').format({'coverage' : '{:.3%}'})
bottom_10_by_country_styler = bottom_10_by_country.style.set_table_attributes("style='display:inline;margin-left:40px'").set_caption('Bottom 10').format({'coverage' : '{:.5%}'})
region_styler = coverage_by_region.style.set_table_attributes("style='display:block'").set_caption('Regions').format({'coverage' : '{:.5%}'})
display_html(top_10_by_country_styler._repr_html_()+bottom_10_by_country_styler._repr_html_()+region_styler._repr_html_(), raw=True)
###Output
_____no_output_____
###Markdown
> Note: population is in millions

Observations

We notice that the countries in the "top 10 coverage" table all have fairly small populations. This is expected, as good coverage in countries with bigger populations would require a very large number of articles. This is reflected in the bottom 10 table, where every country has a population over 30 million.

The official languages of both the "top 10" and "bottom 10" countries are not English. This is interesting given that the articles were fetched from the English Wikipedia.

Coverage is calculated as the number of articles about politicians over a country's population. This does not take into account the historical context of the country nor its political system(s). Some countries may have much richer historical records, political systems that involve more people, etc.

In the region table we can see some of the observations above come into play:

- The population count seems to vaguely dictate the overall order.
- Northern America has a small number of articles for its population, but may also have one of the shortest reported historical periods.
- Many other factors, such as the geographic distribution of Wikipedia's English-speaking contributors, could explain some of the discrepancies between regions.

Relative quality

Our analysis will now focus on 'relative quality', which we calculate as the proportion of articles with a rating of "FA" or "GA" over the total number of articles.

By country
###Code
# Create custom aggregator to count the number of "FA" and "GA" articles
def count_quality_articles(series):
great_articles_count = 0
for val in series:
if val == 'FA' or val == 'GA':
great_articles_count = great_articles_count + 1
return great_articles_count
# Group data by country
relative_quality_by_country = articles_and_population.groupby('country').agg({'article_name': np.size, 'article_quality': count_quality_articles})
# Rename columns for article_count
relative_quality_by_country.columns = ['article_count', 'quality_article_count']
# Calculate relative_quality
relative_quality_by_country['relative_quality'] = (relative_quality_by_country['quality_article_count']/relative_quality_by_country['article_count'])
# Grab top 10
top_10_relative_quality_by_country = relative_quality_by_country.sort_values(by=['relative_quality'], ascending=False).head(10)
# Grab bottom 10
bottom_10_relative_quality_by_country = relative_quality_by_country.sort_values(by=['relative_quality']).head(10)
###Output
_____no_output_____
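###Markdown
As an aside, the same high-quality counts can also be obtained without a custom aggregator. A minimal equivalent sketch:
###Code
# Equivalent sketch (illustration only): flag FA/GA articles with a boolean,
# then sum the flags per country to count high-quality articles.
quality_flags = articles_and_population['article_quality'].isin(['FA', 'GA'])
quality_flags.groupby(articles_and_population['country']).sum().head()
###Output
_____no_output_____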
###Markdown
By region
###Code
# Group data by region
relative_quality_by_region = articles_and_population.groupby('region').agg({'article_name': np.size, 'article_quality': count_quality_articles})
# Rename columns for article_count
relative_quality_by_region.columns = ['article_count', 'quality_article_count']
# Calculate relative_quality
relative_quality_by_region['relative_quality'] = (relative_quality_by_region['quality_article_count']/relative_quality_by_region['article_count'])
# Output by relative_quality descending
relative_quality_by_region = relative_quality_by_region.sort_values(by=['relative_quality'], ascending=False)
###Output
_____no_output_____
###Markdown
Relative quality tables
###Code
# Display logic inspired by: https://stackoverflow.com/questions/38783027/jupyter-notebook-display-two-pandas-tables-side-by-side
top_10_relative_quality_by_country_styler = top_10_relative_quality_by_country.style.set_table_attributes("style='display:inline'").set_caption('Top 10').format({'relative_quality' : '{:.3%}'})
bottom_10_relative_quality_by_country_styler = bottom_10_relative_quality_by_country.style.set_table_attributes("style='display:inline;margin-left:40px'").set_caption('Bottom 10').format({'relative_quality' : '{:.5%}'})
region_styler = relative_quality_by_region.style.set_table_attributes("style='display:block'").set_caption('Regions').format({'relative_quality' : '{:.5%}'})
display_html(top_10_relative_quality_by_country_styler._repr_html_()+bottom_10_relative_quality_by_country_styler._repr_html_()+region_styler._repr_html_(), raw=True)
###Output
_____no_output_____ |
code/exploratory/simulate_library.ipynb | ###Markdown
Simulate library (c) 2020 Tom Röschinger. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT).
###Code
import numpy as np
import random
import string
import pandas as pd
import matplotlib.pyplot as plt
import cmdstanpy
import arviz as az
import bebi103
import numba
import wgregseq
%load_ext autoreload
%autoreload 2
wgregseq.plotting_style()
%matplotlib inline
# Get svg graphics from the notebook
%config InlineBackend.figure_format = 'svg'
import bokeh.io
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
In this notebook we will try to simulate the expression values for a dataset when the energy matrix is given. The purpose is to use these simulated datasets to try to identify the binding sites, and to see for which kinds of datasets we can recover the underlying regulatory architecture. The log-likelihood function is the mutual information, so I expect that we can simply draw from this likelihood when both the energy matrix and the binding sites are given. As input we will use an energy matrix from Reg-Seq. Therefore we first load in the wild-type sequence, here for *ykgE*.
###Code
seqs = pd.read_csv("../../data/RegSeq/wtsequences.csv", index_col=0)
seqs.head()
sequence = seqs.loc[seqs["name"]=="ykgE", "geneseq"].values[0]
sequence
###Output
_____no_output_____
###Markdown
Now we load in the energy matrix (and do some cosmetics).
###Code
emat = pd.read_csv("../../data/RegSeq/ykgEarabinosedataset_alldone_with_largeMCMC194", delim_whitespace=True)[['val_A', 'val_C', 'val_G', 'val_T']]
emat.rename(columns={"val_A": "A", "val_C": "C", "val_G": "G", "val_T": "T"}, inplace=True)
emat.head()
###Output
_____no_output_____
###Markdown
Let's have a look at the sum of all mutation effects per position.
###Code
info = wgregseq.emat_to_information("../../data/RegSeq/ykgEarabinosedataset_alldone_with_largeMCMC194", sequence, old_format=True)
fig, ax = plt.subplots(figsize=(10,2))
plt.bar(range(len(info)), info)
###Output
_____no_output_____
###Markdown
We can easily compute the sum of all entries for the wild type.
###Code
wgregseq.sum_emat(sequence, emat)
###Output
_____no_output_____
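###Markdown
For intuition, the sum over the matrix can also be written out by hand. This is only an illustrative sketch that assumes row `i` of `emat` corresponds to position `i` of `sequence`; the actual `wgregseq.sum_emat` implementation may handle details differently.
###Code
# Illustrative only: add up the matrix entry for the observed base at each position.
manual_sum = sum(emat.loc[i, base] for i, base in enumerate(sequence))
manual_sum
###Output
_____no_output_____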
###Markdown
Let's create some scrambles.
###Code
scrambles = wgregseq.create_scrambles_df(sequence, 10, 5, 100, number=50)
scrambles
###Output
_____no_output_____
###Markdown
And compute the energies for the scrambles.
###Code
scrambles["effect"] = scrambles["sequence"].apply(wgregseq.sum_emat, args=(emat,))
scrambles.head()
###Output
_____no_output_____ |
SmartFireAlarm/Jupyter/Visualize Data.ipynb | ###Markdown
VISUALIZE DATA
(collections and DataFrame)
###Code
#r "nuget:Microsoft.ML,1.5.2"
using XPlot.Plotly;
using Microsoft.ML;
using Microsoft.ML.Data;
Microsoft.ML.MLContext mlContext = new Microsoft.ML.MLContext(seed: 1);
###Output
_____no_output_____
###Markdown
1. Load Data
###Code
#load "C:\Users\dcost\source\repos\SmartFireAlarm\SmartFireAlarm\Jupyter\Models.csx"
const string DATASET_PATH = "./sensors_data.csv";
IDataView data = mlContext.Data.LoadFromTextFile<ModelInput>(
path: DATASET_PATH,
hasHeader: true,
separatorChar: ',');
###Output
_____no_output_____
###Markdown
2. DataFrame
(explore data with Microsoft.Data.Analysis)
###Code
#r "nuget:Microsoft.Data.Analysis"
using Microsoft.AspNetCore.Html;
using Microsoft.Data.Analysis;
###Output
_____no_output_____
###Markdown
Load data into data frame
###Code
//const string DATASET_PATH = "./taxi.csv";
const string DATASET_PATH = "./sensors_data.csv";
var dataFrame = DataFrame.LoadCsv(DATASET_PATH);
//#r "nuget:ApexCode.Interactive.Formatting,0.0.1-beta.5"
//using ApexCode.Interactive.Formatting;
#load "C:\Users\dcost\source\repos\SmartFireAlarm\SmartFireAlarm\Jupyter\Formatters.csx"
Formatters.Categories = new string[] { "FlashLight", "Infrared", "Day", "Lighter" };
Formatters.Register<DataFrame>();
dataFrame
dataFrame.Description()
###Output
_____no_output_____ |
Sequence Models/Building a Recurrent Neural Network Step by Step v3.ipynb | ###Markdown
Building your Recurrent Neural Network - Step by Step

Welcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.

Recurrent Neural Networks (RNNs) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future.

**Notation**:
- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer.
    - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
- Superscript $(i)$ denotes an object associated with the $i^{th}$ example.
    - Example: $x^{(i)}$ is the $i^{th}$ training example input.
- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step.
    - Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ time-step of example $i$.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
    - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.

We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!

Let's first import all the packages that you will need during this assignment.
###Code
import numpy as np
from rnn_utils import *
###Output
_____no_output_____
###Markdown
1 - Forward propagation for the basic Recurrent Neural Network

Later this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$.

**Figure 1**: Basic RNN model

Here's how you can implement an RNN:

**Steps**:
1. Implement the calculations needed for one time-step of the RNN.
2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time.

Let's go!

1.1 - RNN cell

A recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell.

**Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$

**Exercise**: Implement the RNN-cell described in Figure (2).

**Instructions**:
1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.
2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: `softmax`.
3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache.
4. Return $a^{\langle t \rangle}$, $y^{\langle t \rangle}$ and cache.

We will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
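For reference, a typical numerically stable, column-wise softmax looks like the short sketch below. The helper actually used in this notebook is provided by `rnn_utils` and may differ in its details.
###Code
# Reference sketch only: column-wise, numerically stable softmax on a random matrix.
# The softmax used in the graded functions comes from rnn_utils.
z = np.random.randn(2, 10)
z_shifted = z - np.max(z, axis=0, keepdims=True)  # subtract the column max for stability
softmax_z = np.exp(z_shifted) / np.sum(np.exp(z_shifted), axis=0, keepdims=True)
print(softmax_z.sum(axis=0))  # every column should sum to 1
###Output
_____no_output_____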
###Code
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh((np.dot(Waa,a_prev)+np.dot(Wax,xt)+ba))
# compute output of the current cell using the formula given above
yt_pred = softmax(np.dot(Wya,a_next)+by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
###Output
a_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape = (5, 10)
yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape = (2, 10)
###Markdown
**Expected Output**: **a_next[4]**: [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037] **a_next.shape**: (5, 10) **yt[1]**: [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526] **yt.shape**: (2, 10) 1.2 - RNN forward pass You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step. **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. **Exercise**: Code the forward propagation of the RNN described in Figure (3).**Instructions**:1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.2. Initialize the "next" hidden state as $a_0$ (initial hidden state).3. Start looping over each time step, your incremental index is $t$ : - Update the "next" hidden state and the cache by running `rnn_cell_forward` - Store the "next" hidden state in $a$ ($t^{th}$ position) - Store the prediction in y - Add the cache to the list of caches4. Return $a$, $y$ and caches
###Code
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and parameters["Wya"]
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
a = np.zeros((n_a,m,T_x))
y_pred = np.zeros((n_y,m,T_x))
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
###Output
a[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]
a.shape = (5, 10, 4)
y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape = (2, 10, 4)
caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) = 2
###Markdown
**Expected Output**: **a[4][1]**: [-0.99999375 0.77911235 -0.99861469 -0.99833267] **a.shape**: (5, 10, 4) **y[1][3]**: [ 0.79560373 0.86224861 0.11118257 0.81515947] **y.shape**: (2, 10, 4) **cache[1][1][3]**: [-1.1425182 -0.34934272 -0.20889423 0.58662319] **len(cache)**: 2 Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$). In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. 2 - Long Short-Term Memory (LSTM) networkThis following figure shows the operations of an LSTM-cell. **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps. About the gates - Forget gateFor the sake of this illustration, lets assume we are reading words in a piece of text, and want use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this: $$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information. - Update gateOnce we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formulat for the update gate: $$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\{t\}}] + b_u)\tag{2} $$ Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$. - Updating the cell To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. 
The equation we use is: $$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$Finally, the new cell state is: $$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$ - Output gateTo decide which outputs we will use, we will use the following two formulas: $$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$ $$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$Where in equation 5 you decide what to output using a sigmoid function and in equation 6 you multiply that by the $\tanh$ of the previous state. 2.1 - LSTM cell**Exercise**: Implement the LSTM cell described in the Figure (3).**Instructions**:1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
###Code
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
concat = np.zeros((n_a+n_x,m))
concat[: n_a, :] = a_prev
concat[n_a :, :] = xt
# Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
ft = sigmoid(np.dot(Wf,concat)+bf)
it = sigmoid(np.dot(Wi,concat)+bi)
cct = np.tanh(np.dot(Wc,concat)+bc)
c_next = ft*c_prev+it*cct
ot = sigmoid(np.dot(Wo,concat)+bo)
a_next = ot*np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(np.dot(Wy, a_next) + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
###Output
a_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
yt.shape = (2, 10)
cache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
len(cache) = 10
###Markdown
**Expected Output**: **a_next[4]**: [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275] **a_next.shape**: (5, 10) **c_next[2]**: [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932] **c_next.shape**: (5, 10) **yt[1]**: [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381] **yt.shape**: (2, 10) **cache[1][3]**: [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422] **len(cache)**: 10 2.2 - Forward pass for LSTMNow that you have implemented one step of an LSTM, you can now iterate this over this using a for-loop to process a sequence of $T_x$ inputs. **Figure 4**: LSTM over multiple time-steps. **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. **Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
###Code
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a,m,T_x))
c = np.zeros((n_a,m,T_x))
y = np.zeros((n_y,m,T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros((a0.shape))
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = lstm_cell_forward(x[:,:,t], a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
###Output
a[4][3][6] = 0.172117767533
a.shape = (5, 10, 7)
y[1][4][3] = 0.95087346185
y.shape = (2, 10, 7)
caches[1][1[1]] = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139
0.41005165]
c[1][2][1] -0.855544916718
len(caches) = 2
###Markdown
**Expected Output**: **a[4][3][6]** = 0.172117767533 **a.shape** = (5, 10, 7) **y[1][4][3]** = 0.95087346185 **y.shape** = (2, 10, 7) **caches[1][1][1]** = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165] **c[1][2][1]** = -0.855544916718 **len(caches)** = 2 Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The rest of this notebook is optional, and will not be graded. 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. 3.1 - Basic RNN backward passWe will start by computing the backward pass for the basic RNN-cell. **Figure 5**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain-rule from calculas. The chain-rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. Deriving the one step backward functions: To compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand. The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $ \text{sech}(x)^2 = 1 - \tanh(x)^2$Similarly for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$. The final two equations also follow same rule and are derived using the $\tanh$ derivative. Note that the arrangement is done in a way to get the same dimensions to match.
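As a quick, optional numerical sanity check of that identity (not part of the graded exercise), a centered finite difference agrees with $1-\tanh(x)^2$:
###Code
# Optional numeric check: d/dx tanh(x) = 1 - tanh(x)^2
eps = 1e-6
x_check = np.random.randn(5)
numeric = (np.tanh(x_check + eps) - np.tanh(x_check - eps)) / (2 * eps)
analytic = 1 - np.tanh(x_check) ** 2
print(np.max(np.abs(numeric - analytic)))  # should be on the order of 1e-10 or smaller
###Output
_____no_output_____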
###Code
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
# a_next already equals tanh(z), so d(tanh)/dz = 1 - a_next**2, scaled by the upstream gradient da_next
dtanh = (1 - a_next ** 2) * da_next
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = np.dot(Wax.T, dtanh)
dWax = np.dot(dtanh, xt.T)
# compute the gradient with respect to Waa (≈2 lines)
da_prev = np.dot(Waa.T, dtanh)
dWaa = np.dot(dtanh, a_prev.T)
# compute the gradient with respect to b (≈1 line)
dba = np.sum(dtanh, axis=1, keepdims=True)
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
b = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
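###Markdown
As an optional extra (not part of the assignment), one entry of `dba` can be compared against a centered finite-difference estimate through the scalar surrogate loss $L = \sum (a^{\langle t \rangle} * da_{next})$ (element-wise product summed over all entries), whose gradient with respect to $b_a$ equals the `dba` returned above:
###Code
# Optional finite-difference check of a single entry of dba (illustration only).
eps = 1e-6
i = 2  # entry of ba to perturb
params_plus = {k: v.copy() for k, v in parameters.items()}
params_minus = {k: v.copy() for k, v in parameters.items()}
params_plus["ba"][i, 0] += eps
params_minus["ba"][i, 0] -= eps
a_plus, _, _ = rnn_cell_forward(xt, a_prev, params_plus)
a_minus, _, _ = rnn_cell_forward(xt, a_prev, params_minus)
numeric_dba_i = (np.sum(a_plus * da_next) - np.sum(a_minus * da_next)) / (2 * eps)
print(numeric_dba_i, gradients["dba"][i, 0])  # the two values should match closely
###Output
_____no_output_____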
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = -0.460564103059 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = 0.0842968653807 **gradients["da_prev"].shape** = (5, 10) **gradients["dWax"][3][1]** = 0.393081873922 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = -0.28483955787 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [ 0.80517166] **gradients["dba"].shape** = (5, 1) Backward pass through the RNNComputing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.**Instructions**:Implement the `rnn_backward` function. Initialize the return variables with zeros first and then loop through all the time steps while calling the `rnn_cell_backward` at each time timestep, update the other variables accordingly.
###Code
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = None
(a1, a0, x1, parameters) = None
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈6 lines)
dx = None
dWax = None
dWaa = None
dba = None
da0 = None
da_prevt = None
# Loop through all the time steps
for t in reversed(range(None)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = None
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = None
dWax += None
dWaa += None
dba += None
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dx"][1][2]** = [-2.07101689 -0.59255627 0.02466855 0.01483317] **gradients["dx"].shape** = (3, 10, 4) **gradients["da0"][2][3]** = -0.314942375127 **gradients["da0"].shape** = (5, 10) **gradients["dWax"][3][1]** = 11.2641044965 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = 2.30333312658 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [-0.74747722] **gradients["dba"].shape** = (5, 1) 3.2 - LSTM backward pass 3.2.1 One Step backwardThe LSTM backward pass is slighltly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) 3.2.2 gate derivatives$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$$$d\tilde c^{\langle t \rangle} = dc_{next}*\Gamma_u^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * i_t * da_{next} * \tilde c^{\langle t \rangle} * (1-\tanh(\tilde c)^2) \tag{8}$$$$d\Gamma_u^{\langle t \rangle} = dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$$$d\Gamma_f^{\langle t \rangle} = dc_{next}*\tilde c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$ 3.2.3 parameter derivatives $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal (axis= 1) axis on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should have the `keep_dims = True` option.Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$Here, the weights for equations 13 are the first n_a, (i.e. $W_f = W_f[:n_a,:]$ etc...)$$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$where the weights for equation 15 are from n_a to the end, (i.e. $W_f = W_f[n_a:,:]$ etc...)**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
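Before diving in, here is a small shape sanity check for equations (11)-(14), with made-up dimensions: stacking $a_{prev}$ on top of $x_t$ gives an $(n_a + n_x, m)$ matrix, so a gate gradient of shape $(n_a, m)$ times its transpose has shape $(n_a, n_a + n_x)$, matching $W_f$, $W_u$, $W_c$ and $W_o$.
###Code
# Illustration only: shape bookkeeping for equations (11)-(14), with made-up sizes.
n_a_demo, n_x_demo, m_demo = 5, 3, 10
a_prev_demo = np.random.randn(n_a_demo, m_demo)
xt_demo = np.random.randn(n_x_demo, m_demo)
dgate_demo = np.random.randn(n_a_demo, m_demo)      # e.g. d Gamma_f at one time-step
concat_demo = np.concatenate((a_prev_demo, xt_demo), axis=0)
print(np.dot(dgate_demo, concat_demo.T).shape)      # (5, 8), same shape as Wf
###Output
_____no_output_____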
###Code
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = None
n_a, m = None
# Compute gates related derivatives, you can find their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = None
dcct = None
dit = None
dft = None
# Code equations (7) to (10) (≈4 lines)
dit = None
dft = None
dot = None
dcct = None
# Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
da_prev = None
dc_prev = None
dxt = None
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____
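###Markdown
For reference only (this is not the graded cell): a sketch of one common way to write the gradients requested above, using the notation of the docstring and the standard LSTM-cell backward formulas. The mapping to the equation numbers (7)-(17) cited in the comments is an assumption, since that derivation appears earlier in the notebook and is not reproduced here.
###Code
def lstm_cell_backward_reference(da_next, dc_next, cache):
    """Illustrative LSTM-cell backward pass; mirrors the structure of the graded function above."""
    (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
    n_x, m = xt.shape
    n_a, m = a_next.shape

    # Gate derivatives (sigmoid gates contribute s*(1-s); the tanh candidate contributes 1-tanh^2)
    dot = da_next * np.tanh(c_next) * ot * (1 - ot)
    dcct = (dc_next * it + ot * (1 - np.tanh(c_next) ** 2) * it * da_next) * (1 - cct ** 2)
    dit = (dc_next * cct + ot * (1 - np.tanh(c_next) ** 2) * cct * da_next) * it * (1 - it)
    dft = (dc_next * c_prev + ot * (1 - np.tanh(c_next) ** 2) * c_prev * da_next) * ft * (1 - ft)

    # Parameter derivatives: the gates act on [a_prev; xt] stacked vertically
    concat = np.concatenate((a_prev, xt), axis=0)
    dWf = np.dot(dft, concat.T)
    dWi = np.dot(dit, concat.T)
    dWc = np.dot(dcct, concat.T)
    dWo = np.dot(dot, concat.T)
    dbf = np.sum(dft, axis=1, keepdims=True)
    dbi = np.sum(dit, axis=1, keepdims=True)
    dbc = np.sum(dcct, axis=1, keepdims=True)
    dbo = np.sum(dot, axis=1, keepdims=True)

    # Derivatives w.r.t. the previous hidden state, previous memory state and the input
    da_prev = (np.dot(parameters["Wf"][:, :n_a].T, dft) + np.dot(parameters["Wi"][:, :n_a].T, dit)
               + np.dot(parameters["Wc"][:, :n_a].T, dcct) + np.dot(parameters["Wo"][:, :n_a].T, dot))
    dc_prev = dc_next * ft + ot * (1 - np.tanh(c_next) ** 2) * ft * da_next
    dxt = (np.dot(parameters["Wf"][:, n_a:].T, dft) + np.dot(parameters["Wi"][:, n_a:].T, dit)
           + np.dot(parameters["Wc"][:, n_a:].T, dcct) + np.dot(parameters["Wo"][:, n_a:].T, dot))

    return {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf, "dbf": dbf,
            "dWi": dWi, "dbi": dbi, "dWc": dWc, "dbc": dbc, "dWo": dWo, "dbo": dbo}
###Output
_____no_output_____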
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = 3.23055911511 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.0639621419711 **gradients["da_prev"].shape** = (5, 10) **gradients["dc_prev"][2][3]** = 0.797522038797 **gradients["dc_prev"].shape** = (5, 10) **gradients["dWf"][3][1]** = -0.147954838164 **gradients["dWf"].shape** = (5, 8) **gradients["dWi"][1][2]** = 1.05749805523 **gradients["dWi"].shape** = (5, 8) **gradients["dWc"][3][1]** = 2.30456216369 **gradients["dWc"].shape** = (5, 8) **gradients["dWo"][1][2]** = 0.331311595289 **gradients["dWo"].shape** = (5, 8) **gradients["dbf"][4]** = [ 0.18864637] **gradients["dbf"].shape** = (5, 1) **gradients["dbi"][4]** = [-0.40142491] **gradients["dbi"].shape** = (5, 1) **gradients["dbc"][4]** = [ 0.25587763] **gradients["dbc"].shape** = (5, 1) **gradients["dbo"][4]** = [ 0.13893342] **gradients["dbo"].shape** = (5, 1)
3.3 Backward pass through the LSTM RNN
This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally return a dictionary with the new gradients. **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored.
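A minimal sketch of the accumulation pattern described above, assuming `lstm_cell_backward` returns the gradient dictionary documented earlier (the graded cell below keeps its placeholders). Note how `dx` is stored per time step while the parameter gradients are summed across time steps.
###Code
def lstm_backward_sketch(da, caches):
    """Illustrative (non-graded) backward pass over a whole sequence."""
    (caches, x) = caches
    (a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
    n_a, m, T_x = da.shape
    n_x, m = x1.shape

    # Initialise the running gradients with the right shapes
    dx = np.zeros((n_x, m, T_x))
    da_prevt = np.zeros((n_a, m))
    dc_prevt = np.zeros((n_a, m))
    dWf = np.zeros((n_a, n_a + n_x)); dbf = np.zeros((n_a, 1))
    dWi = np.zeros((n_a, n_a + n_x)); dbi = np.zeros((n_a, 1))
    dWc = np.zeros((n_a, n_a + n_x)); dbc = np.zeros((n_a, 1))
    dWo = np.zeros((n_a, n_a + n_x)); dbo = np.zeros((n_a, 1))

    # Walk backwards through time, accumulating the parameter gradients
    for t in reversed(range(T_x)):
        gradients = lstm_cell_backward(da[:, :, t] + da_prevt, dc_prevt, caches[t])
        dx[:, :, t] = gradients["dxt"]            # stored, not summed
        dWf += gradients["dWf"]; dbf += gradients["dbf"]
        dWi += gradients["dWi"]; dbi += gradients["dbi"]
        dWc += gradients["dWc"]; dbc += gradients["dbc"]
        dWo += gradients["dWo"]; dbo += gradients["dbo"]
        da_prevt = gradients["da_prev"]
        dc_prevt = gradients["dc_prev"]

    # Gradient flowing into the initial hidden state
    da0 = da_prevt

    return {"dx": dx, "da0": da0, "dWf": dWf, "dbf": dbf, "dWi": dWi, "dbi": dbi,
            "dWc": dWc, "dbc": dbc, "dWo": dWo, "dbo": dbo}
###Output
_____no_output_____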
###Code
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the initial hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈12 lines)
dx = None
da0 = None
da_prevt = None
dc_prevt = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# loop back over the whole sequence
for t in reversed(range(None)):
# Compute all gradients using lstm_cell_backward
gradients = None
# Store or add the gradient to the parameters' previous step's gradient
dx[:,:,t] = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____ |
Predict house prices/predict-house-prices.ipynb | ###Markdown
**Feature Engineering** **MSSubClass**
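The cells below assume the usual analysis stack and that `train` and `test` have already been loaded from the Kaggle "House Prices" competition CSVs; those setup cells are not shown here. A minimal setup sketch (the train.csv path mirrors the test.csv path used further down and is an assumption):
###Code
# Assumed setup for the cells that follow (not shown in the notebook as dumped).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from catboost import CatBoostRegressor, Pool

# The train.csv path is inferred from the test.csv path used later in this notebook.
train = pd.read_csv("/kaggle/input/house-prices-advanced-regression-techniques/train.csv")
test = pd.read_csv("/kaggle/input/house-prices-advanced-regression-techniques/test.csv")
###Output
_____no_output_____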
###Code
plt.figure(figsize=(7,5))
sns.boxplot(x = 'MSSubClass',y = 'SalePrice',data = train)
a = pd.DataFrame(train.groupby('MSSubClass')['SalePrice'].mean()).sort_values(by='SalePrice',ascending=False)
b = np.array(a.index)
c = np.arange(len(a),0,-1)
for i in range(len(c)):
train.loc[train['MSSubClass']==b[i],'MSSubClass']=c[i]
for j in range(len(c)):
test.loc[test['MSSubClass']==b[j],'MSSubClass']=c[j]
test.loc[test['MSSubClass']==150,'MSSubClass']= 1
###Output
_____no_output_____
###Markdown
**MSZone**
###Code
plt.figure(figsize=(7,5))
sns.boxplot(x = 'MSZoning',y = 'SalePrice',data = train)
a = pd.DataFrame(train.groupby('MSZoning')['SalePrice'].mean()).sort_values('SalePrice')
b = np.array(a.index)
c = np.arange(1,len(a)+1)
for i in range(len(c)):
train.loc[train['MSZoning']==b[i],'MSZoning']=c[i]
test['MSZoning']=test['MSZoning'].fillna('RL')
for i in range(len(c)):
test.loc[test['MSZoning']==b[i],'MSZoning']=c[i]
###Output
_____no_output_____
###Markdown
**LotFrontage** LotFrontage has 259 missing values. Imputing them with the training-set mean.
###Code
a = train.loc[train['LotFrontage'].isna()==False,'LotFrontage'].mean()
train['LotFrontage'] = train['LotFrontage'].fillna(a)
test['LotFrontage'] = test['LotFrontage'].fillna(train['LotFrontage'].mean())
sns.scatterplot(x ='LotFrontage',y= 'SalePrice',data=train)
###Output
_____no_output_____
###Markdown
**Street**
###Code
plt.figure(figsize=(7,5))
sns.boxplot(x = 'Street',y = 'SalePrice',data = train)
a = pd.DataFrame(train.groupby('Street')['SalePrice'].mean()).sort_values('SalePrice')
b = np.array(a.index)
c = np.arange(1,len(a)+1)
for i in range(len(c)):
train.loc[train['Street']==b[i],'Street']=c[i]
for i in range(len(c)):
test.loc[test['Street']==b[i],'Street']=c[i]
###Output
_____no_output_____
###Markdown
**Alley** Alley has 94% missing values. So removing it.
###Code
train = train.drop(['Alley'],axis=1)
test= test.drop(['Alley'],axis=1)
###Output
_____no_output_____
###Markdown
**LotShape**
###Code
plt.figure(figsize=(7,5))
sns.boxplot(x = 'LotShape',y = 'SalePrice',data = train)
plt.show()
a = pd.DataFrame(train.groupby('LotShape')['SalePrice'].mean()).sort_values('SalePrice')
b = np.array(a.index)
c = np.arange(1,len(a)+1)
for i in range(len(c)):
train.loc[train['LotShape']==b[i],'LotShape']=c[i]
for i in range(len(c)):
test.loc[test['LotShape']==b[i],'LotShape']=c[i]
###Output
_____no_output_____
###Markdown
Like Alley, the FireplaceQu, Fence, PoolQC and MiscFeature columns have a lot of missing values. Removing those features.
###Code
train = train.drop(['FireplaceQu','Fence','PoolQC','MiscFeature'],axis=1)
test = test.drop(['FireplaceQu','Fence','PoolQC','MiscFeature'],axis=1)
###Output
_____no_output_____
###Markdown
**Other columns with Random Categorical Values** LandContour, Utilities, LotConfig, LandSlope, Neighborhood, Condition1, Condition2, BldgType, HouseStyle, RoofStyle, RoofMatl, Exterior1st, Exterior2nd, Foundation, Heating, HeatingQC, CentralAir, Functional, PavedDrive and SaleType all have categorical data without any missing values. 'MasVnrType', 'GarageType', 'GarageFinish', 'Electrical', 'BsmtFinType1' and 'BsmtFinType2' have some missing values. Label-encoding the data based on the mean SalePrice of each category:
###Code
b = ['MasVnrType','GarageType','GarageFinish','Electrical','BsmtFinType1','BsmtFinType2']
#MasVnrType we'll change all Nan values to None
train['MasVnrType'] = train['MasVnrType'].fillna('None')
test['MasVnrType'] = test['MasVnrType'].fillna('None')
#GarageType Nan values become None meaning no garages. Same for Garage Finish
train['GarageType'] = train['GarageType'].fillna('None')
train['GarageFinish'] = train['GarageFinish'].fillna('None')
test['GarageType'] = test['GarageType'].fillna('None')
test['GarageFinish'] = test['GarageFinish'].fillna('None')
train['Electrical'] = train['Electrical'].fillna('SBrkr')
train['BsmtFinType1'] = train['BsmtFinType1'].fillna('None')
train['BsmtFinType2'] = train['BsmtFinType2'].fillna('None')
test['BsmtFinType1'] = test['BsmtFinType1'].fillna('None')
test['BsmtFinType2'] = test['BsmtFinType2'].fillna('None')
test['Exterior1st'] = test['Exterior1st'].fillna('VinylSd')
test['Exterior2nd'] = test['Exterior2nd'].fillna('VinylSd')
test['SaleType'] = test['SaleType'].fillna('WD')
test['Functional'] = test['Functional'].fillna('Typ')
test['Utilities'] = test['Utilities'].fillna('AllPub')
l = ['LandContour','Utilities','LotConfig','LandSlope','Neighborhood','Condition1','Condition2',
'BldgType','HouseStyle','RoofStyle','RoofMatl','Exterior1st','Exterior2nd','Foundation','Heating',
'HeatingQC','CentralAir','Functional','PavedDrive','SaleType','MasVnrType','GarageType','GarageFinish','Electrical','BsmtFinType1','BsmtFinType2','SaleCondition']
for i in enumerate(l):
a = pd.DataFrame(train.groupby(i[1])['SalePrice'].mean()).sort_values('SalePrice')
b = np.array(a.index)
c = np.arange(1,len(a)+1)
for j in range(len(c)):
train.loc[train[i[1]]==b[j],i[1]]=c[j]
for j in range(len(c)):
test.loc[test[i[1]]==b[j],i[1]]=c[j]
###Output
_____no_output_____
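###Markdown
The same rank-by-mean-SalePrice mapping is repeated per column above; as an aside, the pattern can be wrapped in a small helper (a sketch with a hypothetical name, not part of the original notebook):
###Code
def encode_by_target_mean(train_df, test_df, col, target="SalePrice"):
    """Replace categories with 1..k ranks ordered by the mean target value (hypothetical helper)."""
    order = train_df.groupby(col)[target].mean().sort_values().index
    mapping = {cat: rank for rank, cat in enumerate(order, start=1)}
    train_df[col] = train_df[col].map(mapping)
    # Categories unseen in train map to NaN here and would still need the manual fixes used above.
    test_df[col] = test_df[col].map(mapping)

# Example: encode_by_target_mean(train, test, "LotShape")
###Output
_____no_output_____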
###Markdown
**Columns with Categorical Values which can be numbered**
###Code
b = ['ExterQual','ExterCond','GarageQual','GarageCond','KitchenQual','BsmtCond','BsmtExposure','BsmtQual']
train['GarageQual'] = train['GarageQual'].fillna('Po')
train['GarageCond'] = train['GarageCond'].fillna('Po')
train['BsmtCond'] = train['BsmtCond'].fillna('Fa')
train['BsmtExposure'] = train['BsmtExposure'].fillna('No')
train['BsmtQual'] = train['BsmtQual'].fillna('Fa')
test['GarageQual'] = test['GarageQual'].fillna('Po')
test['GarageCond'] = test['GarageCond'].fillna('Po')
test['BsmtCond'] = test['BsmtCond'].fillna('Fa')
test['BsmtExposure'] = test['BsmtExposure'].fillna('No')
test['BsmtQual'] = test['BsmtQual'].fillna('Fa')
test['KitchenQual'] = test['KitchenQual'].fillna('TA')
for i in enumerate(['ExterQual','KitchenQual','BsmtQual']):
train.loc[train[i[1]]=='Fa',i[1]] = 1
train.loc[train[i[1]]=='TA',i[1]] = 2
train.loc[train[i[1]]=='Gd',i[1]] = 3
train.loc[train[i[1]]=='Ex',i[1]] = 4
test.loc[test[i[1]]=='Fa',i[1]] = 1
test.loc[test[i[1]]=='TA',i[1]] = 2
test.loc[test[i[1]]=='Gd',i[1]] = 3
test.loc[test[i[1]]=='Ex',i[1]] = 4
for i in enumerate(['ExterCond','GarageQual','GarageCond','BsmtCond']):
train.loc[train[i[1]]=='Po',i[1]] = 1
train.loc[train[i[1]]=='Fa',i[1]] = 2
train.loc[train[i[1]]=='TA',i[1]] = 3
train.loc[train[i[1]]=='Gd',i[1]] = 4
train.loc[train[i[1]]=='Ex',i[1]] = 5
test.loc[test[i[1]]=='Po',i[1]] = 1
test.loc[test[i[1]]=='Fa',i[1]] = 2
test.loc[test[i[1]]=='TA',i[1]] = 3
test.loc[test[i[1]]=='Gd',i[1]] = 4
test.loc[test[i[1]]=='Ex',i[1]] = 5
for i in enumerate(['BsmtExposure']):
train.loc[train[i[1]]=='No',i[1]] = 1
train.loc[train[i[1]]=='Mn',i[1]] = 2
train.loc[train[i[1]]=='Av',i[1]] = 3
train.loc[train[i[1]]=='Gd',i[1]] = 4
test.loc[test[i[1]]=='No',i[1]] = 1
test.loc[test[i[1]]=='Mn',i[1]] = 2
test.loc[test[i[1]]=='Av',i[1]] = 3
test.loc[test[i[1]]=='Gd',i[1]] = 4
train['MasVnrArea'] = train['MasVnrArea'].fillna(train['MasVnrArea'].mean())
test['MasVnrArea'] = test['MasVnrArea'].fillna(test['MasVnrArea'].mean())
train.loc[train['GarageYrBlt'].isna()==True,'GarageYrBlt'] = np.array(train.loc[train['GarageYrBlt'].isna()==True,'YearBuilt'])
test.loc[test['GarageYrBlt'].isna()==True,'GarageYrBlt'] = np.array(test.loc[test['GarageYrBlt'].isna()==True,'YearBuilt'])
test['BsmtFullBath'] = test['BsmtFullBath'].fillna(0.0)
test['BsmtFinSF1'] = test['BsmtFinSF1'].fillna(0.0)
test['BsmtUnfSF'] = test['BsmtUnfSF'].fillna(0.0)
test['TotalBsmtSF'] = test['TotalBsmtSF'].fillna(test['TotalBsmtSF'].mean())
test['GarageCars'] = test['GarageCars'].fillna(2.0)
train = train.drop(['Id','MiscVal','PoolArea','ScreenPorch','3SsnPorch','EnclosedPorch','LowQualFinSF'],axis=1)
test = test.drop(['Id','MiscVal','PoolArea','ScreenPorch','3SsnPorch','EnclosedPorch','LowQualFinSF'],axis=1)
train = train.apply(pd.to_numeric)
test = test.apply(pd.to_numeric)
fig = plt.figure(figsize = (30,30))
sns.heatmap(train.corr())
a = pd.DataFrame(train.corr())
b = np.array(a.columns)
value=0.75
c=[]
for i in enumerate(b):
for j in enumerate(b):
if i<j:
if a.loc[i[1],j[1]]>value:
print(i[1] + ' and ' + j[1] + ' : '+ str(a.loc[i[1],j[1]]))
###Output
OverallQual and SalePrice : 0.7909816005838047
YearBuilt and GarageYrBlt : 0.8451406660104657
Exterior1st and Exterior2nd : 0.8914286477600583
TotalBsmtSF and 1stFlrSF : 0.8195299750050355
GrLivArea and TotRmsAbvGrd : 0.8254893743088377
GarageCars and GarageArea : 0.8824754142814603
GarageQual and GarageCond : 0.9185481833548472
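###Markdown
For reference, the same list of highly correlated pairs can be produced without the nested loop, by keeping only the upper triangle of the correlation matrix (a sketch, equivalent to the cell above):
###Code
corr = train.corr()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # keep each pair once
pairs = upper.stack()
print(pairs[pairs > 0.75].sort_values(ascending=False))
###Output
_____no_output_____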
###Markdown
**Observations from Correlation Heatmap**
1. YearBuilt and GarageYrBlt are highly correlated
2. Exterior1st and Exterior2nd are highly correlated
3. TotalBsmtSF and 1stFlrSF are highly correlated
4. GrLivArea and TotRmsAbvGrd are highly correlated
5. GarageCars and GarageArea are highly correlated
6. GarageQual and GarageCond are highly correlated
Based on these observations we can drop GarageYrBlt, Exterior2nd, 1stFlrSF, TotRmsAbvGrd, GarageArea and GarageCond. Also, based on the correlation matrix, Utilities, LandSlope, Condition2, ExterCond, BsmtFinSF2 and BsmtHalfBath show little relationship with SalePrice, so we drop those columns as well.
###Code
train = train.drop(['GarageYrBlt','Exterior2nd','1stFlrSF','TotRmsAbvGrd','GarageArea','GarageCond','Utilities','LandSlope','Condition2','ExterCond','BsmtFinSF2','BsmtHalfBath'],axis=1)
test = test.drop(['GarageYrBlt','Exterior2nd','1stFlrSF','TotRmsAbvGrd','GarageArea','GarageCond','Utilities','LandSlope','Condition2','ExterCond','BsmtFinSF2','BsmtHalfBath'],axis=1)
x = train.drop(['SalePrice'],axis=1)
y = train['SalePrice']
a = ['LotFrontage','LotArea','YearBuilt','YearRemodAdd','MasVnrArea','BsmtFinSF1','BsmtUnfSF','TotalBsmtSF','2ndFlrSF','GrLivArea','WoodDeckSF','YrSold']
ct = ColumnTransformer([('name',StandardScaler(),a)],remainder='passthrough')
x_norm = pd.DataFrame(ct.fit_transform(x))
test_norm = pd.DataFrame(ct.transform(test))
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=42)
x1_train,x1_test,y1_train,y1_test = train_test_split(x_norm,y,test_size=0.2,random_state=42)
test_Id = pd.read_csv("/kaggle/input/house-prices-advanced-regression-techniques/test.csv")['Id']
###Output
_____no_output_____
###Markdown
**XGBRegressor**
###Code
#XGBoost
model = XGBRegressor(random_state=42,booster='gbtree',eta=0,max_depth=3,learning_rate=0.09,n_estimators=600,reg_alpha=0.01,reg_lambda = 0.1)
model.fit(x_train,y_train)
a = model.predict(test)
###Output
_____no_output_____
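###Markdown
The held-out split created earlier (`x_test`, `y_test`) is not scored in the cells shown; a quick sanity check on it could look like the sketch below (the Kaggle leaderboard metric is RMSE on log prices, which the RMSLE line approximates; strictly positive predictions are assumed for the log):
###Code
from sklearn.metrics import mean_squared_error

val_pred = model.predict(x_test)
rmse = np.sqrt(mean_squared_error(y_test, val_pred))
# Log-scale error, closer to the competition metric (assumes strictly positive predictions).
rmsle = np.sqrt(mean_squared_error(np.log(y_test), np.log(val_pred)))
print("holdout RMSE: %.0f, RMSLE: %.4f" % (rmse, rmsle))
###Output
_____no_output_____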
###Markdown
**CatBoostRegressor**
###Code
model = CatBoostRegressor(random_state=42,iterations=10000,l2_leaf_reg=50,rsm=0.99,depth=5,random_strength =0.1)
eval_pool = Pool(x_test,y_test)
model.fit(x_train, y_train, eval_set=eval_pool, early_stopping_rounds=10)
b= model.predict(test)
###Output
0: learn: 76096.0247268 test: 86390.2133063 best: 86390.2133063 (0) total: 56ms remaining: 9m 20s
1: learn: 75041.3129941 test: 85258.7401047 best: 85258.7401047 (1) total: 61.6ms remaining: 5m 7s
2: learn: 74044.8427633 test: 84223.9417016 best: 84223.9417016 (2) total: 68.5ms remaining: 3m 48s
3: learn: 73019.5192629 test: 83155.0206792 best: 83155.0206792 (3) total: 75.2ms remaining: 3m 7s
4: learn: 72059.7745523 test: 82160.4053913 best: 82160.4053913 (4) total: 81.4ms remaining: 2m 42s
5: learn: 71072.5208580 test: 81130.8733606 best: 81130.8733606 (5) total: 87.6ms remaining: 2m 25s
6: learn: 70161.3740641 test: 80149.5776854 best: 80149.5776854 (6) total: 93.8ms remaining: 2m 13s
7: learn: 69221.6435400 test: 79165.5535872 best: 79165.5535872 (7) total: 100ms remaining: 2m 4s
8: learn: 68302.5790311 test: 78202.3765672 best: 78202.3765672 (8) total: 106ms remaining: 1m 58s
9: learn: 67456.0243452 test: 77336.2760552 best: 77336.2760552 (9) total: 112ms remaining: 1m 52s
10: learn: 66550.7927330 test: 76373.1019185 best: 76373.1019185 (10) total: 117ms remaining: 1m 46s
11: learn: 65727.2364396 test: 75522.9484866 best: 75522.9484866 (11) total: 120ms remaining: 1m 39s
12: learn: 64882.7840952 test: 74670.1445557 best: 74670.1445557 (12) total: 122ms remaining: 1m 33s
13: learn: 64114.3371420 test: 73867.6314187 best: 73867.6314187 (13) total: 125ms remaining: 1m 28s
14: learn: 63327.0511059 test: 73065.8490434 best: 73065.8490434 (14) total: 127ms remaining: 1m 24s
15: learn: 62619.7931584 test: 72360.8994394 best: 72360.8994394 (15) total: 129ms remaining: 1m 20s
16: learn: 61847.0858228 test: 71572.8728371 best: 71572.8728371 (16) total: 131ms remaining: 1m 17s
17: learn: 61126.4177272 test: 70829.1973664 best: 70829.1973664 (17) total: 134ms remaining: 1m 14s
18: learn: 60425.0264228 test: 70111.1794253 best: 70111.1794253 (18) total: 136ms remaining: 1m 11s
19: learn: 59746.5471736 test: 69374.4984375 best: 69374.4984375 (19) total: 138ms remaining: 1m 9s
20: learn: 59084.9690109 test: 68668.2143347 best: 68668.2143347 (20) total: 141ms remaining: 1m 6s
21: learn: 58426.9063598 test: 67986.1374232 best: 67986.1374232 (21) total: 143ms remaining: 1m 4s
22: learn: 57741.8181689 test: 67262.3778343 best: 67262.3778343 (22) total: 145ms remaining: 1m 2s
23: learn: 57070.5576701 test: 66567.4375675 best: 66567.4375675 (23) total: 147ms remaining: 1m 1s
24: learn: 56447.8281911 test: 65895.5965473 best: 65895.5965473 (24) total: 149ms remaining: 59.6s
25: learn: 55871.0075030 test: 65311.6562588 best: 65311.6562588 (25) total: 152ms remaining: 58.2s
26: learn: 55276.5542905 test: 64658.4636606 best: 64658.4636606 (26) total: 154ms remaining: 56.8s
27: learn: 54693.4561581 test: 64056.2506335 best: 64056.2506335 (27) total: 156ms remaining: 55.6s
28: learn: 54106.6514705 test: 63438.8257729 best: 63438.8257729 (28) total: 158ms remaining: 54.5s
29: learn: 53562.3440433 test: 62837.6991181 best: 62837.6991181 (29) total: 161ms remaining: 53.4s
30: learn: 53076.7561307 test: 62301.5895204 best: 62301.5895204 (30) total: 163ms remaining: 52.4s
31: learn: 52501.5963345 test: 61712.8282377 best: 61712.8282377 (31) total: 165ms remaining: 51.4s
32: learn: 52014.5417828 test: 61165.8042057 best: 61165.8042057 (32) total: 167ms remaining: 50.5s
33: learn: 51502.3794876 test: 60627.4032447 best: 60627.4032447 (33) total: 169ms remaining: 49.6s
34: learn: 51025.1942900 test: 60102.6732173 best: 60102.6732173 (34) total: 172ms remaining: 48.8s
35: learn: 50534.2785326 test: 59610.7227701 best: 59610.7227701 (35) total: 174ms remaining: 48.1s
36: learn: 50114.3196685 test: 59181.5809980 best: 59181.5809980 (36) total: 176ms remaining: 47.4s
37: learn: 49658.9858929 test: 58714.4997028 best: 58714.4997028 (37) total: 178ms remaining: 46.7s
38: learn: 49202.1511305 test: 58231.4485388 best: 58231.4485388 (38) total: 180ms remaining: 46.1s
39: learn: 48763.4100996 test: 57744.6198453 best: 57744.6198453 (39) total: 182ms remaining: 45.4s
40: learn: 48326.2396264 test: 57296.0137477 best: 57296.0137477 (40) total: 185ms remaining: 44.9s
41: learn: 47917.9909043 test: 56870.7731428 best: 56870.7731428 (41) total: 187ms remaining: 44.4s
42: learn: 47503.5850128 test: 56436.8535868 best: 56436.8535868 (42) total: 189ms remaining: 43.8s
43: learn: 47072.3809195 test: 55987.8142111 best: 55987.8142111 (43) total: 192ms remaining: 43.4s
44: learn: 46656.0360027 test: 55559.4863757 best: 55559.4863757 (44) total: 194ms remaining: 42.9s
45: learn: 46240.2194550 test: 55143.4605274 best: 55143.4605274 (45) total: 196ms remaining: 42.5s
46: learn: 45869.4980737 test: 54748.4042019 best: 54748.4042019 (46) total: 198ms remaining: 42s
47: learn: 45522.9517432 test: 54387.4508945 best: 54387.4508945 (47) total: 201ms remaining: 41.6s
48: learn: 45132.0116593 test: 53971.5948309 best: 53971.5948309 (48) total: 203ms remaining: 41.2s
49: learn: 44762.6148347 test: 53570.0495716 best: 53570.0495716 (49) total: 205ms remaining: 40.8s
50: learn: 44398.7964973 test: 53182.3961227 best: 53182.3961227 (50) total: 207ms remaining: 40.4s
51: learn: 44068.8353212 test: 52838.9029154 best: 52838.9029154 (51) total: 209ms remaining: 40s
52: learn: 43712.9277567 test: 52457.7173695 best: 52457.7173695 (52) total: 211ms remaining: 39.7s
53: learn: 43359.4507859 test: 52071.7325191 best: 52071.7325191 (53) total: 214ms remaining: 39.3s
54: learn: 43013.3229689 test: 51703.1650645 best: 51703.1650645 (54) total: 216ms remaining: 39s
55: learn: 42738.8008644 test: 51411.3448408 best: 51411.3448408 (55) total: 218ms remaining: 38.7s
56: learn: 42438.5937246 test: 51076.6092786 best: 51076.6092786 (56) total: 220ms remaining: 38.4s
57: learn: 42151.7870005 test: 50782.0554199 best: 50782.0554199 (57) total: 222ms remaining: 38.1s
58: learn: 41852.0392738 test: 50448.4523783 best: 50448.4523783 (58) total: 225ms remaining: 37.8s
59: learn: 41545.5724828 test: 50121.7889142 best: 50121.7889142 (59) total: 227ms remaining: 37.6s
60: learn: 41254.8009376 test: 49814.2038852 best: 49814.2038852 (60) total: 229ms remaining: 37.3s
61: learn: 40967.2481556 test: 49496.1720414 best: 49496.1720414 (61) total: 231ms remaining: 37.1s
62: learn: 40671.7317665 test: 49185.7800427 best: 49185.7800427 (62) total: 233ms remaining: 36.8s
63: learn: 40491.0534802 test: 49004.2120370 best: 49004.2120370 (63) total: 236ms remaining: 36.6s
64: learn: 40340.6861729 test: 48850.9124474 best: 48850.9124474 (64) total: 238ms remaining: 36.3s
65: learn: 40087.8628743 test: 48548.4530800 best: 48548.4530800 (65) total: 240ms remaining: 36.1s
66: learn: 39914.4756290 test: 48374.2094999 best: 48374.2094999 (66) total: 242ms remaining: 35.9s
67: learn: 39660.7347711 test: 48112.3889374 best: 48112.3889374 (67) total: 244ms remaining: 35.7s
68: learn: 39481.6465031 test: 47935.0109517 best: 47935.0109517 (68) total: 248ms remaining: 35.6s
69: learn: 39227.1339884 test: 47659.0858234 best: 47659.0858234 (69) total: 250ms remaining: 35.4s
70: learn: 39067.7331505 test: 47499.2469773 best: 47499.2469773 (70) total: 252ms remaining: 35.2s
71: learn: 38942.6496690 test: 47356.1433655 best: 47356.1433655 (71) total: 257ms remaining: 35.4s
72: learn: 38704.4344485 test: 47093.6958506 best: 47093.6958506 (72) total: 260ms remaining: 35.3s
73: learn: 38492.8923127 test: 46864.6797371 best: 46864.6797371 (73) total: 263ms remaining: 35.2s
74: learn: 38352.0494309 test: 46728.9365535 best: 46728.9365535 (74) total: 265ms remaining: 35s
75: learn: 38216.9849869 test: 46592.5055298 best: 46592.5055298 (75) total: 267ms remaining: 34.9s
76: learn: 38093.7888336 test: 46453.9526745 best: 46453.9526745 (76) total: 269ms remaining: 34.7s
77: learn: 37947.9745216 test: 46309.6545092 best: 46309.6545092 (77) total: 271ms remaining: 34.5s
78: learn: 37828.2387505 test: 46175.5546485 best: 46175.5546485 (78) total: 273ms remaining: 34.3s
79: learn: 37614.9678117 test: 45953.2996477 best: 45953.2996477 (79) total: 276ms remaining: 34.2s
80: learn: 37447.8484540 test: 45779.5380830 best: 45779.5380830 (80) total: 278ms remaining: 34s
81: learn: 37250.9281375 test: 45593.3337118 best: 45593.3337118 (81) total: 280ms remaining: 33.8s
82: learn: 37127.1394189 test: 45466.3265847 best: 45466.3265847 (82) total: 282ms remaining: 33.7s
83: learn: 36957.6363420 test: 45286.2141046 best: 45286.2141046 (83) total: 284ms remaining: 33.6s
84: learn: 36754.1760213 test: 45076.9091323 best: 45076.9091323 (84) total: 286ms remaining: 33.4s
85: learn: 36559.6587268 test: 44861.2383308 best: 44861.2383308 (85) total: 289ms remaining: 33.3s
86: learn: 36454.1096284 test: 44739.5050487 best: 44739.5050487 (86) total: 291ms remaining: 33.2s
87: learn: 36345.6063002 test: 44639.2617541 best: 44639.2617541 (87) total: 294ms remaining: 33.1s
88: learn: 36225.5116281 test: 44527.7112525 best: 44527.7112525 (88) total: 297ms remaining: 33s
89: learn: 36027.4550598 test: 44315.7936727 best: 44315.7936727 (89) total: 299ms remaining: 32.9s
90: learn: 35920.8213330 test: 44209.7645616 best: 44209.7645616 (90) total: 301ms remaining: 32.8s
91: learn: 35751.8546985 test: 44029.7057395 best: 44029.7057395 (91) total: 303ms remaining: 32.7s
92: learn: 35624.1288588 test: 43895.1178966 best: 43895.1178966 (92) total: 306ms remaining: 32.5s
93: learn: 35477.5374286 test: 43748.1651852 best: 43748.1651852 (93) total: 308ms remaining: 32.4s
94: learn: 35377.1962662 test: 43637.6329967 best: 43637.6329967 (94) total: 310ms remaining: 32.3s
95: learn: 35207.8389270 test: 43456.9093263 best: 43456.9093263 (95) total: 312ms remaining: 32.2s
96: learn: 35113.2050510 test: 43346.7874404 best: 43346.7874404 (96) total: 314ms remaining: 32.1s
97: learn: 34984.9633212 test: 43219.0636483 best: 43219.0636483 (97) total: 316ms remaining: 32s
98: learn: 34794.3141279 test: 43026.5523973 best: 43026.5523973 (98) total: 319ms remaining: 31.9s
99: learn: 34688.1357424 test: 42918.5879829 best: 42918.5879829 (99) total: 322ms remaining: 31.9s
100: learn: 34586.6739746 test: 42825.4607999 best: 42825.4607999 (100) total: 324ms remaining: 31.8s
101: learn: 34429.9786740 test: 42666.5334078 best: 42666.5334078 (101) total: 326ms remaining: 31.7s
102: learn: 34269.2048936 test: 42508.2179686 best: 42508.2179686 (102) total: 329ms remaining: 31.6s
103: learn: 34116.7389586 test: 42355.1306233 best: 42355.1306233 (103) total: 331ms remaining: 31.5s
104: learn: 34029.6172858 test: 42253.2943132 best: 42253.2943132 (104) total: 333ms remaining: 31.4s
105: learn: 33869.0810150 test: 42097.4467541 best: 42097.4467541 (105) total: 335ms remaining: 31.3s
106: learn: 33716.7083812 test: 41945.6818590 best: 41945.6818590 (106) total: 337ms remaining: 31.2s
107: learn: 33587.8005102 test: 41819.8779642 best: 41819.8779642 (107) total: 339ms remaining: 31.1s
108: learn: 33419.5498088 test: 41653.0766045 best: 41653.0766045 (108) total: 341ms remaining: 31s
109: learn: 33282.0374208 test: 41514.9727401 best: 41514.9727401 (109) total: 344ms remaining: 30.9s
110: learn: 33185.0827349 test: 41432.1980907 best: 41432.1980907 (110) total: 346ms remaining: 30.8s
111: learn: 33106.6040392 test: 41342.9713382 best: 41342.9713382 (111) total: 348ms remaining: 30.7s
112: learn: 32999.9645119 test: 41235.1796841 best: 41235.1796841 (112) total: 350ms remaining: 30.6s
113: learn: 32885.8846725 test: 41115.9763427 best: 41115.9763427 (113) total: 352ms remaining: 30.5s
114: learn: 32803.8055384 test: 41022.6767768 best: 41022.6767768 (114) total: 354ms remaining: 30.4s
115: learn: 32695.4992977 test: 40905.9203219 best: 40905.9203219 (115) total: 356ms remaining: 30.4s
116: learn: 32590.0256122 test: 40799.2767030 best: 40799.2767030 (116) total: 359ms remaining: 30.3s
117: learn: 32460.1523600 test: 40665.8083678 best: 40665.8083678 (117) total: 361ms remaining: 30.2s
118: learn: 32387.5645109 test: 40582.6239805 best: 40582.6239805 (118) total: 363ms remaining: 30.1s
119: learn: 32369.6639250 test: 40573.7353347 best: 40573.7353347 (119) total: 365ms remaining: 30.1s
120: learn: 32284.9891613 test: 40497.4522580 best: 40497.4522580 (120) total: 367ms remaining: 30s
121: learn: 32212.0076676 test: 40410.9506149 best: 40410.9506149 (121) total: 370ms remaining: 29.9s
122: learn: 32140.9900899 test: 40330.0360259 best: 40330.0360259 (122) total: 372ms remaining: 29.8s
123: learn: 32040.7633529 test: 40226.6073109 best: 40226.6073109 (123) total: 374ms remaining: 29.8s
124: learn: 31959.4175927 test: 40153.8792116 best: 40153.8792116 (124) total: 376ms remaining: 29.7s
125: learn: 31862.4522948 test: 40057.3975164 best: 40057.3975164 (125) total: 378ms remaining: 29.6s
126: learn: 31765.3009129 test: 39967.9112289 best: 39967.9112289 (126) total: 380ms remaining: 29.6s
127: learn: 31669.9135506 test: 39869.1435283 best: 39869.1435283 (127) total: 382ms remaining: 29.5s
128: learn: 31605.4211711 test: 39803.8653371 best: 39803.8653371 (128) total: 385ms remaining: 29.4s
129: learn: 31570.9953052 test: 39780.0135615 best: 39780.0135615 (129) total: 387ms remaining: 29.4s
130: learn: 31505.4674300 test: 39711.7034380 best: 39711.7034380 (130) total: 389ms remaining: 29.3s
131: learn: 31439.8224413 test: 39632.8179412 best: 39632.8179412 (131) total: 392ms remaining: 29.3s
132: learn: 31354.8964532 test: 39550.0360815 best: 39550.0360815 (132) total: 394ms remaining: 29.2s
133: learn: 31235.8820167 test: 39431.5637420 best: 39431.5637420 (133) total: 396ms remaining: 29.2s
134: learn: 31150.5062422 test: 39352.0414541 best: 39352.0414541 (134) total: 398ms remaining: 29.1s
135: learn: 31088.2012458 test: 39289.9077563 best: 39289.9077563 (135) total: 400ms remaining: 29s
136: learn: 31016.6553013 test: 39226.5822086 best: 39226.5822086 (136) total: 403ms remaining: 29s
137: learn: 30942.5236301 test: 39158.8752193 best: 39158.8752193 (137) total: 405ms remaining: 28.9s
138: learn: 30888.1462663 test: 39097.5754352 best: 39097.5754352 (138) total: 407ms remaining: 28.9s
139: learn: 30777.4196374 test: 38992.5438456 best: 38992.5438456 (139) total: 409ms remaining: 28.8s
140: learn: 30717.6579502 test: 38922.3999954 best: 38922.3999954 (140) total: 412ms remaining: 28.8s
141: learn: 30649.0287468 test: 38843.5781518 best: 38843.5781518 (141) total: 414ms remaining: 28.8s
142: learn: 30620.5858817 test: 38805.5302918 best: 38805.5302918 (142) total: 416ms remaining: 28.7s
143: learn: 30513.3723853 test: 38699.9071535 best: 38699.9071535 (143) total: 419ms remaining: 28.6s
144: learn: 30394.8398261 test: 38573.5552567 best: 38573.5552567 (144) total: 421ms remaining: 28.6s
145: learn: 30291.7254488 test: 38472.6998566 best: 38472.6998566 (145) total: 423ms remaining: 28.6s
146: learn: 30273.7436590 test: 38449.7942003 best: 38449.7942003 (146) total: 425ms remaining: 28.5s
147: learn: 30198.9141103 test: 38382.2940137 best: 38382.2940137 (147) total: 428ms remaining: 28.5s
148: learn: 30169.7855610 test: 38343.4262238 best: 38343.4262238 (148) total: 430ms remaining: 28.4s
149: learn: 30126.7764313 test: 38306.8284110 best: 38306.8284110 (149) total: 432ms remaining: 28.4s
150: learn: 30047.5645762 test: 38226.6193088 best: 38226.6193088 (150) total: 436ms remaining: 28.4s
151: learn: 30019.0753141 test: 38192.7890005 best: 38192.7890005 (151) total: 438ms remaining: 28.4s
152: learn: 29978.5275046 test: 38148.1167349 best: 38148.1167349 (152) total: 441ms remaining: 28.4s
153: learn: 29962.7993129 test: 38127.3187950 best: 38127.3187950 (153) total: 443ms remaining: 28.3s
154: learn: 29881.0999618 test: 38044.4292551 best: 38044.4292551 (154) total: 445ms remaining: 28.3s
155: learn: 29813.9550432 test: 37982.0174387 best: 37982.0174387 (155) total: 447ms remaining: 28.2s
156: learn: 29780.3871369 test: 37940.2665184 best: 37940.2665184 (156) total: 450ms remaining: 28.2s
157: learn: 29767.4971401 test: 37925.9944195 best: 37925.9944195 (157) total: 452ms remaining: 28.2s
158: learn: 29709.2019723 test: 37874.3300737 best: 37874.3300737 (158) total: 454ms remaining: 28.1s
159: learn: 29610.9384872 test: 37778.4166550 best: 37778.4166550 (159) total: 457ms remaining: 28.1s
160: learn: 29590.6565280 test: 37756.4473002 best: 37756.4473002 (160) total: 459ms remaining: 28.1s
161: learn: 29529.3924435 test: 37686.9047880 best: 37686.9047880 (161) total: 462ms remaining: 28s
162: learn: 29509.3586278 test: 37665.1828781 best: 37665.1828781 (162) total: 464ms remaining: 28s
163: learn: 29492.0602816 test: 37646.3089340 best: 37646.3089340 (163) total: 466ms remaining: 27.9s
164: learn: 29481.6751882 test: 37633.7427435 best: 37633.7427435 (164) total: 468ms remaining: 27.9s
165: learn: 29439.8710472 test: 37595.5645278 best: 37595.5645278 (165) total: 471ms remaining: 27.9s
166: learn: 29353.3816936 test: 37508.4276995 best: 37508.4276995 (166) total: 473ms remaining: 27.8s
167: learn: 29327.0311604 test: 37474.3609564 best: 37474.3609564 (167) total: 475ms remaining: 27.8s
168: learn: 29309.3910123 test: 37453.9437603 best: 37453.9437603 (168) total: 478ms remaining: 27.8s
169: learn: 29271.8890831 test: 37412.4151209 best: 37412.4151209 (169) total: 480ms remaining: 27.8s
170: learn: 29207.7297294 test: 37347.6985136 best: 37347.6985136 (170) total: 482ms remaining: 27.7s
171: learn: 29190.4023392 test: 37327.6729551 best: 37327.6729551 (171) total: 485ms remaining: 27.7s
172: learn: 29171.3504947 test: 37306.2780311 best: 37306.2780311 (172) total: 487ms remaining: 27.7s
173: learn: 29158.9406389 test: 37290.0552964 best: 37290.0552964 (173) total: 490ms remaining: 27.6s
174: learn: 29079.3977054 test: 37219.9671124 best: 37219.9671124 (174) total: 492ms remaining: 27.6s
175: learn: 29008.4046418 test: 37153.3901688 best: 37153.3901688 (175) total: 494ms remaining: 27.6s
176: learn: 28952.7030166 test: 37103.9125760 best: 37103.9125760 (176) total: 496ms remaining: 27.5s
177: learn: 28904.8277035 test: 37046.8805742 best: 37046.8805742 (177) total: 499ms remaining: 27.5s
178: learn: 28887.1213228 test: 37040.2025435 best: 37040.2025435 (178) total: 501ms remaining: 27.5s
179: learn: 28810.9833151 test: 36973.0310533 best: 36973.0310533 (179) total: 503ms remaining: 27.5s
180: learn: 28796.3643285 test: 36958.4379389 best: 36958.4379389 (180) total: 506ms remaining: 27.4s
181: learn: 28780.1536672 test: 36938.6223915 best: 36938.6223915 (181) total: 508ms remaining: 27.4s
182: learn: 28763.6847407 test: 36919.3622285 best: 36919.3622285 (182) total: 510ms remaining: 27.4s
183: learn: 28683.4504111 test: 36837.3838728 best: 36837.3838728 (183) total: 513ms remaining: 27.3s
184: learn: 28601.1855998 test: 36750.5758937 best: 36750.5758937 (184) total: 515ms remaining: 27.3s
185: learn: 28522.9975035 test: 36669.9932005 best: 36669.9932005 (185) total: 517ms remaining: 27.3s
186: learn: 28513.4245959 test: 36657.0647132 best: 36657.0647132 (186) total: 519ms remaining: 27.3s
187: learn: 28428.8272767 test: 36567.4045472 best: 36567.4045472 (187) total: 522ms remaining: 27.2s
188: learn: 28419.1665072 test: 36555.7753029 best: 36555.7753029 (188) total: 524ms remaining: 27.2s
189: learn: 28409.5322073 test: 36542.8865379 best: 36542.8865379 (189) total: 526ms remaining: 27.2s
190: learn: 28380.5928256 test: 36509.1348902 best: 36509.1348902 (190) total: 529ms remaining: 27.2s
191: learn: 28347.5886042 test: 36476.4762162 best: 36476.4762162 (191) total: 531ms remaining: 27.1s
192: learn: 28274.3800479 test: 36396.9477415 best: 36396.9477415 (192) total: 533ms remaining: 27.1s
193: learn: 28180.4761437 test: 36298.2716364 best: 36298.2716364 (193) total: 536ms remaining: 27.1s
194: learn: 28165.0984433 test: 36279.3683806 best: 36279.3683806 (194) total: 538ms remaining: 27.1s
195: learn: 28141.5564841 test: 36247.3107426 best: 36247.3107426 (195) total: 540ms remaining: 27s
196: learn: 28055.9401721 test: 36184.2632540 best: 36184.2632540 (196) total: 543ms remaining: 27s
197: learn: 28041.9047714 test: 36166.4259188 best: 36166.4259188 (197) total: 545ms remaining: 27s
198: learn: 28026.8517021 test: 36147.7620043 best: 36147.7620043 (198) total: 547ms remaining: 27s
199: learn: 28009.0633946 test: 36127.4712058 best: 36127.4712058 (199) total: 550ms remaining: 26.9s
200: learn: 27994.0572787 test: 36108.9069129 best: 36108.9069129 (200) total: 552ms remaining: 26.9s
201: learn: 27984.9496011 test: 36096.4505870 best: 36096.4505870 (201) total: 554ms remaining: 26.9s
202: learn: 27970.1016264 test: 36077.9430009 best: 36077.9430009 (202) total: 557ms remaining: 26.9s
203: learn: 27957.1300224 test: 36062.1815459 best: 36062.1815459 (203) total: 559ms remaining: 26.9s
204: learn: 27870.0524577 test: 35975.3963811 best: 35975.3963811 (204) total: 562ms remaining: 26.8s
205: learn: 27817.1839587 test: 35916.7000124 best: 35916.7000124 (205) total: 564ms remaining: 26.8s
206: learn: 27802.5641797 test: 35898.5314886 best: 35898.5314886 (206) total: 566ms remaining: 26.8s
207: learn: 27787.9980663 test: 35880.4322836 best: 35880.4322836 (207) total: 568ms remaining: 26.8s
208: learn: 27737.5462723 test: 35825.8139362 best: 35825.8139362 (208) total: 571ms remaining: 26.7s
209: learn: 27722.7323078 test: 35807.4916555 best: 35807.4916555 (209) total: 573ms remaining: 26.7s
210: learn: 27655.6682403 test: 35740.8605925 best: 35740.8605925 (210) total: 576ms remaining: 26.7s
211: learn: 27577.9923628 test: 35676.8851214 best: 35676.8851214 (211) total: 578ms remaining: 26.7s
212: learn: 27563.7912143 test: 35659.1663394 best: 35659.1663394 (212) total: 580ms remaining: 26.7s
213: learn: 27493.3467130 test: 35588.0141506 best: 35588.0141506 (213) total: 583ms remaining: 26.6s
214: learn: 27479.0450734 test: 35570.2573013 best: 35570.2573013 (214) total: 585ms remaining: 26.6s
215: learn: 27438.0348697 test: 35534.1806041 best: 35534.1806041 (215) total: 587ms remaining: 26.6s
216: learn: 27371.4396442 test: 35469.8996621 best: 35469.8996621 (216) total: 590ms remaining: 26.6s
217: learn: 27355.9960937 test: 35464.6684705 best: 35464.6684705 (217) total: 592ms remaining: 26.6s
218: learn: 27344.8218148 test: 35453.2296228 best: 35453.2296228 (218) total: 595ms remaining: 26.6s
219: learn: 27331.6082641 test: 35437.0369790 best: 35437.0369790 (219) total: 597ms remaining: 26.5s
220: learn: 27318.7478722 test: 35420.5286601 best: 35420.5286601 (220) total: 599ms remaining: 26.5s
221: learn: 27270.8236073 test: 35357.2031305 best: 35357.2031305 (221) total: 601ms remaining: 26.5s
222: learn: 27256.8226156 test: 35339.7223215 best: 35339.7223215 (222) total: 604ms remaining: 26.5s
223: learn: 27244.2370833 test: 35324.0317400 best: 35324.0317400 (223) total: 606ms remaining: 26.4s
224: learn: 27231.5627384 test: 35307.7565977 best: 35307.7565977 (224) total: 608ms remaining: 26.4s
225: learn: 27217.7442043 test: 35292.0038849 best: 35292.0038849 (225) total: 610ms remaining: 26.4s
226: learn: 27204.7507039 test: 35276.1136964 best: 35276.1136964 (226) total: 612ms remaining: 26.4s
227: learn: 27196.1398795 test: 35264.4621072 best: 35264.4621072 (227) total: 614ms remaining: 26.3s
228: learn: 27183.8615605 test: 35249.0770826 best: 35249.0770826 (228) total: 617ms remaining: 26.3s
229: learn: 27171.0634049 test: 35233.3714659 best: 35233.3714659 (229) total: 619ms remaining: 26.3s
230: learn: 27156.1591302 test: 35228.3711744 best: 35228.3711744 (230) total: 621ms remaining: 26.3s
231: learn: 27143.4526608 test: 35211.9095512 best: 35211.9095512 (231) total: 623ms remaining: 26.2s
232: learn: 27125.0749858 test: 35192.9323086 best: 35192.9323086 (232) total: 625ms remaining: 26.2s
233: learn: 27112.7545609 test: 35177.0365090 best: 35177.0365090 (233) total: 627ms remaining: 26.2s
234: learn: 27029.3109493 test: 35114.9249508 best: 35114.9249508 (234) total: 630ms remaining: 26.2s
235: learn: 27018.7276756 test: 35101.1407075 best: 35101.1407075 (235) total: 632ms remaining: 26.1s
236: learn: 26958.1116659 test: 35035.8782423 best: 35035.8782423 (236) total: 634ms remaining: 26.1s
237: learn: 26910.0991474 test: 34989.9362115 best: 34989.9362115 (237) total: 636ms remaining: 26.1s
238: learn: 26898.3094565 test: 34974.3318720 best: 34974.3318720 (238) total: 638ms remaining: 26.1s
239: learn: 26882.7757590 test: 34956.2535938 best: 34956.2535938 (239) total: 640ms remaining: 26s
240: learn: 26870.7476064 test: 34940.7080305 best: 34940.7080305 (240) total: 642ms remaining: 26s
241: learn: 26858.3576476 test: 34925.4332336 best: 34925.4332336 (241) total: 644ms remaining: 26s
242: learn: 26844.6970344 test: 34914.1789480 best: 34914.1789480 (242) total: 646ms remaining: 26s
243: learn: 26833.0904326 test: 34899.4629998 best: 34899.4629998 (243) total: 648ms remaining: 25.9s
244: learn: 26791.8679506 test: 34857.1874965 best: 34857.1874965 (244) total: 651ms remaining: 25.9s
245: learn: 26734.6831241 test: 34796.9304557 best: 34796.9304557 (245) total: 653ms remaining: 25.9s
246: learn: 26721.7852585 test: 34778.6144209 best: 34778.6144209 (246) total: 655ms remaining: 25.9s
247: learn: 26713.6373509 test: 34767.5488434 best: 34767.5488434 (247) total: 657ms remaining: 25.8s
248: learn: 26661.3233712 test: 34709.5052706 best: 34709.5052706 (248) total: 659ms remaining: 25.8s
249: learn: 26650.2956810 test: 34694.7101248 best: 34694.7101248 (249) total: 661ms remaining: 25.8s
250: learn: 26638.2545561 test: 34679.7913157 best: 34679.7913157 (250) total: 663ms remaining: 25.8s
251: learn: 26627.2287522 test: 34667.0671031 best: 34667.0671031 (251) total: 665ms remaining: 25.7s
252: learn: 26619.1875649 test: 34656.1607701 best: 34656.1607701 (252) total: 667ms remaining: 25.7s
253: learn: 26564.7490073 test: 34604.6503686 best: 34604.6503686 (253) total: 670ms remaining: 25.7s
254: learn: 26551.5758526 test: 34589.3617095 best: 34589.3617095 (254) total: 672ms remaining: 25.7s
255: learn: 26540.8129345 test: 34576.3526248 best: 34576.3526248 (255) total: 674ms remaining: 25.7s
256: learn: 26527.7345404 test: 34561.1998679 best: 34561.1998679 (256) total: 676ms remaining: 25.6s
257: learn: 26513.9642587 test: 34556.8394250 best: 34556.8394250 (257) total: 678ms remaining: 25.6s
258: learn: 26476.0492529 test: 34518.0140866 best: 34518.0140866 (258) total: 681ms remaining: 25.6s
259: learn: 26467.7086430 test: 34511.8695161 best: 34511.8695161 (259) total: 683ms remaining: 25.6s
260: learn: 26457.7611110 test: 34501.5238612 best: 34501.5238612 (260) total: 685ms remaining: 25.6s
261: learn: 26440.8318147 test: 34483.0476295 best: 34483.0476295 (261) total: 687ms remaining: 25.5s
262: learn: 26428.7381323 test: 34467.6677999 best: 34467.6677999 (262) total: 689ms remaining: 25.5s
263: learn: 26381.1664263 test: 34420.2142220 best: 34420.2142220 (263) total: 691ms remaining: 25.5s
264: learn: 26367.7076979 test: 34416.0875512 best: 34416.0875512 (264) total: 693ms remaining: 25.5s
265: learn: 26355.9955595 test: 34400.9548886 best: 34400.9548886 (265) total: 695ms remaining: 25.4s
266: learn: 26306.9425350 test: 34367.5426495 best: 34367.5426495 (266) total: 697ms remaining: 25.4s
267: learn: 26258.6267278 test: 34352.9888663 best: 34352.9888663 (267) total: 700ms remaining: 25.4s
268: learn: 26238.5857511 test: 34342.2121925 best: 34342.2121925 (268) total: 702ms remaining: 25.4s
269: learn: 26227.9615624 test: 34329.3177728 best: 34329.3177728 (269) total: 704ms remaining: 25.4s
270: learn: 26217.5650222 test: 34315.2962696 best: 34315.2962696 (270) total: 706ms remaining: 25.3s
271: learn: 26206.6555211 test: 34301.4140879 best: 34301.4140879 (271) total: 708ms remaining: 25.3s
272: learn: 26195.7813191 test: 34287.5675088 best: 34287.5675088 (272) total: 710ms remaining: 25.3s
273: learn: 26130.6917496 test: 34232.6553930 best: 34232.6553930 (273) total: 712ms remaining: 25.3s
274: learn: 26057.9604350 test: 34171.1176053 best: 34171.1176053 (274) total: 715ms remaining: 25.3s
275: learn: 26047.2243186 test: 34157.3919873 best: 34157.3919873 (275) total: 717ms remaining: 25.2s
276: learn: 26039.2991768 test: 34151.6565890 best: 34151.6565890 (276) total: 719ms remaining: 25.2s
277: learn: 25982.5593139 test: 34107.1608509 best: 34107.1608509 (277) total: 721ms remaining: 25.2s
278: learn: 25974.6741000 test: 34101.4667054 best: 34101.4667054 (278) total: 723ms remaining: 25.2s
279: learn: 25964.0344139 test: 34087.8435023 best: 34087.8435023 (279) total: 725ms remaining: 25.2s
280: learn: 25956.1865870 test: 34082.1819219 best: 34082.1819219 (280) total: 727ms remaining: 25.2s
281: learn: 25948.3578596 test: 34076.5426166 best: 34076.5426166 (281) total: 729ms remaining: 25.1s
282: learn: 25940.7399750 test: 34070.9891577 best: 34070.9891577 (282) total: 731ms remaining: 25.1s
283: learn: 25930.1639846 test: 34057.4214943 best: 34057.4214943 (283) total: 734ms remaining: 25.1s
284: learn: 25877.3561820 test: 34015.2465434 best: 34015.2465434 (284) total: 736ms remaining: 25.1s
285: learn: 25869.6033260 test: 34009.6777779 best: 34009.6777779 (285) total: 738ms remaining: 25.1s
286: learn: 25858.7177359 test: 33996.0346498 best: 33996.0346498 (286) total: 740ms remaining: 25s
287: learn: 25792.7189255 test: 33948.3339370 best: 33948.3339370 (287) total: 742ms remaining: 25s
288: learn: 25782.0487829 test: 33934.2721273 best: 33934.2721273 (288) total: 745ms remaining: 25s
289: learn: 25771.2653230 test: 33920.6590722 best: 33920.6590722 (289) total: 746ms remaining: 25s
290: learn: 25760.9200118 test: 33907.3247183 best: 33907.3247183 (290) total: 749ms remaining: 25s
291: learn: 25717.3654260 test: 33865.4338132 best: 33865.4338132 (291) total: 751ms remaining: 25s
292: learn: 25709.9508487 test: 33860.0458027 best: 33860.0458027 (292) total: 753ms remaining: 24.9s
293: learn: 25702.4143808 test: 33855.6243080 best: 33855.6243080 (293) total: 755ms remaining: 24.9s
294: learn: 25692.1862227 test: 33842.4030192 best: 33842.4030192 (294) total: 757ms remaining: 24.9s
295: learn: 25668.5158609 test: 33828.4921995 best: 33828.4921995 (295) total: 760ms remaining: 24.9s
296: learn: 25616.5853692 test: 33784.9020274 best: 33784.9020274 (296) total: 762ms remaining: 24.9s
297: learn: 25606.4525306 test: 33771.7803563 best: 33771.7803563 (297) total: 764ms remaining: 24.9s
298: learn: 25598.9630632 test: 33766.4381634 best: 33766.4381634 (298) total: 766ms remaining: 24.9s
299: learn: 25557.4703495 test: 33724.2652941 best: 33724.2652941 (299) total: 768ms remaining: 24.8s
300: learn: 25547.1976411 test: 33710.6404300 best: 33710.6404300 (300) total: 770ms remaining: 24.8s
301: learn: 25528.2503642 test: 33687.8164707 best: 33687.8164707 (301) total: 772ms remaining: 24.8s
302: learn: 25518.2857348 test: 33674.8608084 best: 33674.8608084 (302) total: 775ms remaining: 24.8s
303: learn: 25509.4333088 test: 33665.3386249 best: 33665.3386249 (303) total: 777ms remaining: 24.8s
304: learn: 25457.0487427 test: 33640.2509366 best: 33640.2509366 (304) total: 779ms remaining: 24.8s
305: learn: 25447.1504254 test: 33627.3905207 best: 33627.3905207 (305) total: 781ms remaining: 24.8s
306: learn: 25437.0741359 test: 33613.9669269 best: 33613.9669269 (306) total: 784ms remaining: 24.7s
307: learn: 25427.0369473 test: 33600.5674464 best: 33600.5674464 (307) total: 786ms remaining: 24.7s
308: learn: 25383.3478874 test: 33574.7998911 best: 33574.7998911 (308) total: 788ms remaining: 24.7s
309: learn: 25354.9764891 test: 33557.0520331 best: 33557.0520331 (309) total: 790ms remaining: 24.7s
310: learn: 25345.6559323 test: 33544.3002394 best: 33544.3002394 (310) total: 792ms remaining: 24.7s
311: learn: 25300.3493757 test: 33507.0981481 best: 33507.0981481 (311) total: 795ms remaining: 24.7s
312: learn: 25255.2310681 test: 33465.0831114 best: 33465.0831114 (312) total: 797ms remaining: 24.7s
313: learn: 25248.2512746 test: 33460.0719611 best: 33460.0719611 (313) total: 799ms remaining: 24.6s
314: learn: 25241.2881565 test: 33455.0803395 best: 33455.0803395 (314) total: 802ms remaining: 24.6s
315: learn: 25233.7217795 test: 33445.0610617 best: 33445.0610617 (315) total: 804ms remaining: 24.6s
316: learn: 25224.8468744 test: 33435.3736319 best: 33435.3736319 (316) total: 806ms remaining: 24.6s
317: learn: 25217.1559330 test: 33427.7260188 best: 33427.7260188 (317) total: 810ms remaining: 24.7s
318: learn: 25208.3318719 test: 33418.0825583 best: 33418.0825583 (318) total: 812ms remaining: 24.6s
319: learn: 25198.6236469 test: 33405.0184300 best: 33405.0184300 (319) total: 814ms remaining: 24.6s
320: learn: 25189.8555607 test: 33395.6209985 best: 33395.6209985 (320) total: 816ms remaining: 24.6s
321: learn: 25179.9719055 test: 33382.3243987 best: 33382.3243987 (321) total: 819ms remaining: 24.6s
322: learn: 25170.1211508 test: 33369.0605851 best: 33369.0605851 (322) total: 821ms remaining: 24.6s
323: learn: 25161.4311372 test: 33359.7341334 best: 33359.7341334 (323) total: 823ms remaining: 24.6s
324: learn: 25151.8589326 test: 33346.8367896 best: 33346.8367896 (324) total: 825ms remaining: 24.6s
325: learn: 25142.3308981 test: 33333.9654931 best: 33333.9654931 (325) total: 827ms remaining: 24.6s
326: learn: 25133.7203152 test: 33324.5181000 best: 33324.5181000 (326) total: 830ms remaining: 24.5s
327: learn: 25073.0028151 test: 33269.3552070 best: 33269.3552070 (327) total: 832ms remaining: 24.5s
328: learn: 25064.4484404 test: 33259.9604695 best: 33259.9604695 (328) total: 834ms remaining: 24.5s
329: learn: 25024.8610873 test: 33212.1363801 best: 33212.1363801 (329) total: 837ms remaining: 24.5s
330: learn: 25016.3711955 test: 33202.7891788 best: 33202.7891788 (330) total: 839ms remaining: 24.5s
331: learn: 25007.9042165 test: 33193.6142134 best: 33193.6142134 (331) total: 841ms remaining: 24.5s
332: learn: 24946.7142122 test: 33133.6916221 best: 33133.6916221 (332) total: 843ms remaining: 24.5s
333: learn: 24938.2858292 test: 33124.4476708 best: 33124.4476708 (333) total: 846ms remaining: 24.5s
334: learn: 24929.6287097 test: 33115.8291691 best: 33115.8291691 (334) total: 848ms remaining: 24.5s
335: learn: 24920.2163926 test: 33103.7134984 best: 33103.7134984 (335) total: 850ms remaining: 24.4s
336: learn: 24888.3019135 test: 33081.1601525 best: 33081.1601525 (336) total: 852ms remaining: 24.4s
337: learn: 24870.0162915 test: 33056.5145546 best: 33056.5145546 (337) total: 855ms remaining: 24.4s
338: learn: 24861.7287025 test: 33047.3864293 best: 33047.3864293 (338) total: 857ms remaining: 24.4s
339: learn: 24853.4789404 test: 33038.2613223 best: 33038.2613223 (339) total: 859ms remaining: 24.4s
340: learn: 24845.2470896 test: 33029.3449450 best: 33029.3449450 (340) total: 861ms remaining: 24.4s
341: learn: 24838.0835252 test: 33019.6797808 best: 33019.6797808 (341) total: 864ms remaining: 24.4s
342: learn: 24829.9094488 test: 33010.6245614 best: 33010.6245614 (342) total: 866ms remaining: 24.4s
343: learn: 24821.7641166 test: 33001.5967799 best: 33001.5967799 (343) total: 869ms remaining: 24.4s
344: learn: 24813.6372150 test: 32992.7765617 best: 32992.7765617 (344) total: 872ms remaining: 24.4s
345: learn: 24805.5401286 test: 32983.8266466 best: 32983.8266466 (345) total: 874ms remaining: 24.4s
346: learn: 24762.5369575 test: 32950.9737514 best: 32950.9737514 (346) total: 877ms remaining: 24.4s
347: learn: 24747.1131234 test: 32929.5139500 best: 32929.5139500 (347) total: 879ms remaining: 24.4s
348: learn: 24729.4587674 test: 32905.5836741 best: 32905.5836741 (348) total: 881ms remaining: 24.4s
349: learn: 24721.4894646 test: 32896.7275532 best: 32896.7275532 (349) total: 883ms remaining: 24.4s
350: learn: 24703.9964034 test: 32872.9842721 best: 32872.9842721 (350) total: 886ms remaining: 24.3s
351: learn: 24681.0637519 test: 32854.1582391 best: 32854.1582391 (351) total: 888ms remaining: 24.3s
352: learn: 24673.1899051 test: 32845.3749020 best: 32845.3749020 (352) total: 890ms remaining: 24.3s
353: learn: 24665.3438207 test: 32836.6181343 best: 32836.6181343 (353) total: 892ms remaining: 24.3s
354: learn: 24642.6325706 test: 32819.6938887 best: 32819.6938887 (354) total: 894ms remaining: 24.3s
355: learn: 24620.0349999 test: 32807.2590718 best: 32807.2590718 (355) total: 896ms remaining: 24.3s
356: learn: 24577.4315852 test: 32786.4899220 best: 32786.4899220 (356) total: 898ms remaining: 24.3s
357: learn: 24562.6796802 test: 32765.8797349 best: 32765.8797349 (357) total: 900ms remaining: 24.3s
358: learn: 24554.9722371 test: 32757.2209001 best: 32757.2209001 (358) total: 903ms remaining: 24.2s
359: learn: 24540.3434585 test: 32736.7577154 best: 32736.7577154 (359) total: 905ms remaining: 24.2s
[... per-iteration training log condensed: iterations 360–990; learn metric 24532.70 → 19559.22, test metric 32728.15 → 28476.40, with "best" tracking the lowest test value reached so far; elapsed time 0.91 s → 2.29 s, estimated remaining 24.2 s → 20.8 s ...]
991: learn: 19558.5052017 test: 28476.3986689 best: 28476.3986689 (991) total: 2.29s remaining: 20.8s
992: learn: 19556.9356615 test: 28474.8145786 best: 28474.8145786 (992) total: 2.29s remaining: 20.8s
993: learn: 19541.7338829 test: 28470.6770063 best: 28470.6770063 (993) total: 2.3s remaining: 20.8s
994: learn: 19540.5324708 test: 28469.5749822 best: 28469.5749822 (994) total: 2.3s remaining: 20.8s
995: learn: 19539.3782662 test: 28468.8489153 best: 28468.8489153 (995) total: 2.3s remaining: 20.8s
996: learn: 19537.9030462 test: 28467.4067668 best: 28467.4067668 (996) total: 2.3s remaining: 20.8s
997: learn: 19537.0840554 test: 28467.5240698 best: 28467.4067668 (996) total: 2.31s remaining: 20.8s
998: learn: 19536.3633091 test: 28467.0252455 best: 28467.0252455 (998) total: 2.31s remaining: 20.8s
999: learn: 19534.9662878 test: 28465.9022280 best: 28465.9022280 (999) total: 2.31s remaining: 20.8s
1000: learn: 19533.7637083 test: 28465.1788068 best: 28465.1788068 (1000) total: 2.31s remaining: 20.8s
1001: learn: 19530.9131297 test: 28461.1755448 best: 28461.1755448 (1001) total: 2.32s remaining: 20.8s
1002: learn: 19528.5754420 test: 28459.3472763 best: 28459.3472763 (1002) total: 2.32s remaining: 20.8s
1003: learn: 19527.6352768 test: 28459.4741395 best: 28459.3472763 (1002) total: 2.32s remaining: 20.8s
1004: learn: 19526.7532478 test: 28459.3322825 best: 28459.3322825 (1004) total: 2.32s remaining: 20.8s
1005: learn: 19518.6525523 test: 28452.1714288 best: 28452.1714288 (1005) total: 2.33s remaining: 20.8s
1006: learn: 19498.0481928 test: 28428.5743571 best: 28428.5743571 (1006) total: 2.33s remaining: 20.8s
1007: learn: 19497.0306655 test: 28427.4101455 best: 28427.4101455 (1007) total: 2.33s remaining: 20.8s
1008: learn: 19495.6507106 test: 28426.3207606 best: 28426.3207606 (1008) total: 2.33s remaining: 20.8s
1009: learn: 19494.7782442 test: 28426.3083609 best: 28426.3083609 (1009) total: 2.33s remaining: 20.8s
1010: learn: 19493.5898927 test: 28425.5855279 best: 28425.5855279 (1010) total: 2.34s remaining: 20.8s
1011: learn: 19492.4030609 test: 28424.9973906 best: 28424.9973906 (1011) total: 2.34s remaining: 20.8s
1012: learn: 19489.2680198 test: 28419.5718392 best: 28419.5718392 (1012) total: 2.34s remaining: 20.8s
1013: learn: 19488.5648355 test: 28419.4397239 best: 28419.4397239 (1013) total: 2.34s remaining: 20.8s
1014: learn: 19471.4607576 test: 28404.8255149 best: 28404.8255149 (1014) total: 2.35s remaining: 20.8s
1015: learn: 19470.7458194 test: 28404.7092602 best: 28404.7092602 (1015) total: 2.35s remaining: 20.8s
1016: learn: 19469.5759132 test: 28403.2632681 best: 28403.2632681 (1016) total: 2.35s remaining: 20.8s
1017: learn: 19466.3708707 test: 28395.8794438 best: 28395.8794438 (1017) total: 2.35s remaining: 20.7s
1018: learn: 19465.5233943 test: 28395.7328147 best: 28395.7328147 (1018) total: 2.35s remaining: 20.7s
1019: learn: 19463.3994681 test: 28395.6625132 best: 28395.6625132 (1019) total: 2.35s remaining: 20.7s
1020: learn: 19461.0684868 test: 28393.3952346 best: 28393.3952346 (1020) total: 2.36s remaining: 20.7s
1021: learn: 19459.9036734 test: 28390.4196627 best: 28390.4196627 (1021) total: 2.36s remaining: 20.7s
1022: learn: 19457.7901031 test: 28390.3587949 best: 28390.3587949 (1022) total: 2.36s remaining: 20.7s
1023: learn: 19455.1306048 test: 28387.3820136 best: 28387.3820136 (1023) total: 2.36s remaining: 20.7s
1024: learn: 19454.2057409 test: 28387.5182070 best: 28387.3820136 (1023) total: 2.37s remaining: 20.7s
1025: learn: 19452.8525464 test: 28386.4338606 best: 28386.4338606 (1025) total: 2.37s remaining: 20.7s
1026: learn: 19432.5541560 test: 28375.6855288 best: 28375.6855288 (1026) total: 2.37s remaining: 20.7s
1027: learn: 19431.8734235 test: 28375.0404247 best: 28375.0404247 (1027) total: 2.37s remaining: 20.7s
1028: learn: 19431.1758017 test: 28375.0041178 best: 28375.0041178 (1028) total: 2.38s remaining: 20.7s
1029: learn: 19430.2915535 test: 28373.9861265 best: 28373.9861265 (1029) total: 2.38s remaining: 20.7s
1030: learn: 19429.4504482 test: 28373.9972607 best: 28373.9861265 (1029) total: 2.38s remaining: 20.7s
1031: learn: 19427.2266237 test: 28372.7468284 best: 28372.7468284 (1031) total: 2.38s remaining: 20.7s
1032: learn: 19423.5011469 test: 28365.4048004 best: 28365.4048004 (1032) total: 2.38s remaining: 20.7s
1033: learn: 19422.3705960 test: 28364.5258524 best: 28364.5258524 (1033) total: 2.38s remaining: 20.7s
1034: learn: 19421.0928163 test: 28362.2391593 best: 28362.2391593 (1034) total: 2.39s remaining: 20.7s
1035: learn: 19420.0547987 test: 28360.9108499 best: 28360.9108499 (1035) total: 2.39s remaining: 20.7s
1036: learn: 19419.1796360 test: 28360.4163260 best: 28360.4163260 (1036) total: 2.39s remaining: 20.7s
1037: learn: 19418.3402999 test: 28360.2384092 best: 28360.2384092 (1037) total: 2.39s remaining: 20.7s
1038: learn: 19417.1646305 test: 28358.5078196 best: 28358.5078196 (1038) total: 2.4s remaining: 20.7s
1039: learn: 19400.9328067 test: 28346.8759843 best: 28346.8759843 (1039) total: 2.4s remaining: 20.7s
1040: learn: 19398.6582065 test: 28344.8930934 best: 28344.8930934 (1040) total: 2.4s remaining: 20.7s
1041: learn: 19397.8084437 test: 28342.8873471 best: 28342.8873471 (1041) total: 2.4s remaining: 20.7s
1042: learn: 19396.7146132 test: 28342.1840793 best: 28342.1840793 (1042) total: 2.4s remaining: 20.6s
1043: learn: 19382.0904935 test: 28332.2620125 best: 28332.2620125 (1043) total: 2.41s remaining: 20.6s
1044: learn: 19379.7120484 test: 28329.6867155 best: 28329.6867155 (1044) total: 2.41s remaining: 20.6s
1045: learn: 19377.8970508 test: 28328.6716582 best: 28328.6716582 (1045) total: 2.41s remaining: 20.6s
1046: learn: 19377.2039793 test: 28328.5756476 best: 28328.5756476 (1046) total: 2.41s remaining: 20.6s
1047: learn: 19376.5200626 test: 28328.4887216 best: 28328.4887216 (1047) total: 2.42s remaining: 20.6s
1048: learn: 19375.5495759 test: 28326.3452674 best: 28326.3452674 (1048) total: 2.42s remaining: 20.6s
1049: learn: 19373.9317557 test: 28326.2007617 best: 28326.2007617 (1049) total: 2.42s remaining: 20.6s
1050: learn: 19373.0986201 test: 28326.4237683 best: 28326.2007617 (1049) total: 2.42s remaining: 20.6s
1051: learn: 19372.0782250 test: 28325.1443837 best: 28325.1443837 (1051) total: 2.42s remaining: 20.6s
1052: learn: 19371.2718687 test: 28325.0563694 best: 28325.0563694 (1052) total: 2.42s remaining: 20.6s
1053: learn: 19369.9569397 test: 28323.9781836 best: 28323.9781836 (1053) total: 2.43s remaining: 20.6s
1054: learn: 19368.6425649 test: 28322.5441320 best: 28322.5441320 (1054) total: 2.43s remaining: 20.6s
1055: learn: 19367.7776575 test: 28322.0533846 best: 28322.0533846 (1055) total: 2.43s remaining: 20.6s
1056: learn: 19366.4707272 test: 28321.0087291 best: 28321.0087291 (1056) total: 2.43s remaining: 20.6s
1057: learn: 19365.4339832 test: 28319.4250807 best: 28319.4250807 (1057) total: 2.44s remaining: 20.6s
1058: learn: 19363.2727204 test: 28315.7501583 best: 28315.7501583 (1058) total: 2.44s remaining: 20.6s
1059: learn: 19361.8149747 test: 28314.3489358 best: 28314.3489358 (1059) total: 2.44s remaining: 20.6s
1060: learn: 19360.7175460 test: 28312.9648129 best: 28312.9648129 (1060) total: 2.44s remaining: 20.6s
1061: learn: 19359.7099997 test: 28311.7113500 best: 28311.7113500 (1061) total: 2.44s remaining: 20.6s
1062: learn: 19358.8998635 test: 28311.7817570 best: 28311.7113500 (1061) total: 2.45s remaining: 20.6s
1063: learn: 19357.3104009 test: 28311.6613791 best: 28311.6613791 (1063) total: 2.45s remaining: 20.6s
1064: learn: 19354.4086320 test: 28310.5496542 best: 28310.5496542 (1064) total: 2.45s remaining: 20.6s
1065: learn: 19352.5689903 test: 28308.0991503 best: 28308.0991503 (1065) total: 2.45s remaining: 20.6s
1066: learn: 19331.1116775 test: 28298.5858549 best: 28298.5858549 (1066) total: 2.46s remaining: 20.6s
1067: learn: 19330.2636712 test: 28297.5911556 best: 28297.5911556 (1067) total: 2.46s remaining: 20.6s
1068: learn: 19329.4594637 test: 28297.6639708 best: 28297.5911556 (1067) total: 2.46s remaining: 20.6s
1069: learn: 19328.5154276 test: 28295.5584921 best: 28295.5584921 (1069) total: 2.46s remaining: 20.6s
1070: learn: 19327.7230992 test: 28294.8189950 best: 28294.8189950 (1070) total: 2.46s remaining: 20.6s
1071: learn: 19326.1476365 test: 28294.6941272 best: 28294.6941272 (1071) total: 2.47s remaining: 20.6s
1072: learn: 19325.2918684 test: 28294.8466465 best: 28294.6941272 (1071) total: 2.47s remaining: 20.6s
1073: learn: 19323.9987087 test: 28293.7738695 best: 28293.7738695 (1073) total: 2.47s remaining: 20.5s
1074: learn: 19323.1818798 test: 28293.4195508 best: 28293.4195508 (1074) total: 2.47s remaining: 20.5s
1075: learn: 19322.2557196 test: 28292.7302209 best: 28292.7302209 (1075) total: 2.48s remaining: 20.5s
1076: learn: 19321.6504734 test: 28292.0376972 best: 28292.0376972 (1076) total: 2.48s remaining: 20.5s
1077: learn: 19320.2699581 test: 28288.8105236 best: 28288.8105236 (1077) total: 2.48s remaining: 20.5s
1078: learn: 19319.4883925 test: 28288.9466553 best: 28288.8105236 (1077) total: 2.48s remaining: 20.5s
1079: learn: 19318.8184426 test: 28288.8184717 best: 28288.8105236 (1077) total: 2.48s remaining: 20.5s
1080: learn: 19318.0672066 test: 28288.1101613 best: 28288.1101613 (1080) total: 2.49s remaining: 20.5s
1081: learn: 19317.4686710 test: 28287.4197671 best: 28287.4197671 (1081) total: 2.49s remaining: 20.5s
1082: learn: 19316.8228764 test: 28286.7292433 best: 28286.7292433 (1082) total: 2.49s remaining: 20.5s
1083: learn: 19316.0095828 test: 28286.8384018 best: 28286.7292433 (1082) total: 2.49s remaining: 20.5s
1084: learn: 19315.0807901 test: 28284.7508027 best: 28284.7508027 (1084) total: 2.5s remaining: 20.5s
1085: learn: 19314.2591791 test: 28284.0568988 best: 28284.0568988 (1085) total: 2.5s remaining: 20.5s
1086: learn: 19313.4193456 test: 28284.2040887 best: 28284.0568988 (1085) total: 2.5s remaining: 20.5s
1087: learn: 19312.6051539 test: 28284.0496716 best: 28284.0496716 (1087) total: 2.5s remaining: 20.5s
1088: learn: 19311.8776489 test: 28284.2227719 best: 28284.0496716 (1087) total: 2.51s remaining: 20.5s
1089: learn: 19311.0933818 test: 28284.1830160 best: 28284.0496716 (1087) total: 2.51s remaining: 20.5s
1090: learn: 19310.3980818 test: 28283.4177001 best: 28283.4177001 (1090) total: 2.51s remaining: 20.5s
1091: learn: 19309.0447371 test: 28280.5527044 best: 28280.5527044 (1091) total: 2.51s remaining: 20.5s
1092: learn: 19308.3056611 test: 28279.8447506 best: 28279.8447506 (1092) total: 2.52s remaining: 20.5s
1093: learn: 19307.0685284 test: 28279.8893558 best: 28279.8447506 (1092) total: 2.52s remaining: 20.5s
1094: learn: 19306.2796267 test: 28279.5984862 best: 28279.5984862 (1094) total: 2.52s remaining: 20.5s
1095: learn: 19305.4369790 test: 28278.4470663 best: 28278.4470663 (1095) total: 2.52s remaining: 20.5s
1096: learn: 19303.2811936 test: 28277.3012064 best: 28277.3012064 (1096) total: 2.52s remaining: 20.5s
1097: learn: 19302.6669265 test: 28277.2899199 best: 28277.2899199 (1097) total: 2.53s remaining: 20.5s
1098: learn: 19301.8972669 test: 28277.1462318 best: 28277.1462318 (1098) total: 2.53s remaining: 20.5s
1099: learn: 19301.2714614 test: 28276.3410841 best: 28276.3410841 (1099) total: 2.53s remaining: 20.5s
1100: learn: 19300.6489666 test: 28275.6568832 best: 28275.6568832 (1100) total: 2.53s remaining: 20.5s
1101: learn: 19299.3816343 test: 28274.5585821 best: 28274.5585821 (1101) total: 2.54s remaining: 20.5s
1102: learn: 19298.7305643 test: 28274.4405501 best: 28274.4405501 (1102) total: 2.54s remaining: 20.5s
1103: learn: 19297.4727322 test: 28273.3530392 best: 28273.3530392 (1103) total: 2.54s remaining: 20.5s
1104: learn: 19296.4972154 test: 28271.8533571 best: 28271.8533571 (1104) total: 2.54s remaining: 20.5s
1105: learn: 19279.3625557 test: 28266.2999549 best: 28266.2999549 (1105) total: 2.54s remaining: 20.5s
1106: learn: 19278.6265592 test: 28265.3183137 best: 28265.3183137 (1106) total: 2.55s remaining: 20.5s
1107: learn: 19277.1969316 test: 28266.1764854 best: 28265.3183137 (1106) total: 2.55s remaining: 20.5s
1108: learn: 19275.6784707 test: 28266.0409971 best: 28265.3183137 (1106) total: 2.55s remaining: 20.5s
1109: learn: 19275.0715173 test: 28265.3345069 best: 28265.3183137 (1106) total: 2.55s remaining: 20.4s
1110: learn: 19273.8246210 test: 28262.5500389 best: 28262.5500389 (1110) total: 2.56s remaining: 20.4s
1111: learn: 19273.0752420 test: 28262.4360599 best: 28262.4360599 (1111) total: 2.56s remaining: 20.4s
1112: learn: 19270.3708665 test: 28254.2162189 best: 28254.2162189 (1112) total: 2.56s remaining: 20.4s
1113: learn: 19269.5485595 test: 28253.2418789 best: 28253.2418789 (1113) total: 2.56s remaining: 20.4s
1114: learn: 19268.8444281 test: 28253.3721467 best: 28253.2418789 (1113) total: 2.56s remaining: 20.4s
1115: learn: 19266.7787505 test: 28249.0872679 best: 28249.0872679 (1115) total: 2.57s remaining: 20.4s
1116: learn: 19252.2701099 test: 28245.1102488 best: 28245.1102488 (1116) total: 2.57s remaining: 20.4s
1117: learn: 19251.1855727 test: 28243.4665404 best: 28243.4665404 (1117) total: 2.57s remaining: 20.4s
1118: learn: 19250.5320784 test: 28243.8209653 best: 28243.4665404 (1117) total: 2.57s remaining: 20.4s
1119: learn: 19249.5772383 test: 28242.2906798 best: 28242.2906798 (1119) total: 2.58s remaining: 20.4s
1120: learn: 19248.3471649 test: 28241.2639845 best: 28241.2639845 (1120) total: 2.58s remaining: 20.4s
1121: learn: 19247.7472506 test: 28241.1868074 best: 28241.1868074 (1121) total: 2.58s remaining: 20.4s
1122: learn: 19246.7731093 test: 28239.9342877 best: 28239.9342877 (1122) total: 2.58s remaining: 20.4s
1123: learn: 19245.9735263 test: 28239.3791850 best: 28239.3791850 (1123) total: 2.58s remaining: 20.4s
1124: learn: 19245.0260698 test: 28238.2274741 best: 28238.2274741 (1124) total: 2.59s remaining: 20.4s
1125: learn: 19244.2259840 test: 28237.2449028 best: 28237.2449028 (1125) total: 2.59s remaining: 20.4s
1126: learn: 19243.2422553 test: 28236.0459842 best: 28236.0459842 (1126) total: 2.59s remaining: 20.4s
1127: learn: 19241.7428680 test: 28235.1311272 best: 28235.1311272 (1127) total: 2.59s remaining: 20.4s
1128: learn: 19241.1527940 test: 28234.4234517 best: 28234.4234517 (1128) total: 2.6s remaining: 20.4s
1129: learn: 19239.8062885 test: 28234.3854032 best: 28234.3854032 (1129) total: 2.6s remaining: 20.4s
1130: learn: 19238.5101536 test: 28233.4358281 best: 28233.4358281 (1130) total: 2.6s remaining: 20.4s
1131: learn: 19237.7079549 test: 28233.2994707 best: 28233.2994707 (1131) total: 2.6s remaining: 20.4s
1132: learn: 19236.9133582 test: 28233.4485374 best: 28233.2994707 (1131) total: 2.6s remaining: 20.4s
1133: learn: 19236.1742468 test: 28233.3418685 best: 28233.2994707 (1131) total: 2.61s remaining: 20.4s
1134: learn: 19234.8308122 test: 28233.1025521 best: 28233.1025521 (1134) total: 2.61s remaining: 20.4s
1135: learn: 19233.6467131 test: 28233.0119946 best: 28233.0119946 (1135) total: 2.61s remaining: 20.4s
1136: learn: 19232.8561211 test: 28232.4338899 best: 28232.4338899 (1136) total: 2.61s remaining: 20.4s
1137: learn: 19231.6743761 test: 28232.3001471 best: 28232.3001471 (1137) total: 2.62s remaining: 20.4s
1138: learn: 19230.9088484 test: 28231.6266839 best: 28231.6266839 (1138) total: 2.62s remaining: 20.4s
1139: learn: 19229.9777537 test: 28230.4937053 best: 28230.4937053 (1139) total: 2.62s remaining: 20.4s
1140: learn: 19229.2181969 test: 28230.4164340 best: 28230.4164340 (1140) total: 2.62s remaining: 20.4s
1141: learn: 19228.5435754 test: 28229.6114131 best: 28229.6114131 (1141) total: 2.62s remaining: 20.4s
1142: learn: 19227.1019074 test: 28229.5165403 best: 28229.5165403 (1142) total: 2.63s remaining: 20.4s
1143: learn: 19205.8801950 test: 28215.1755772 best: 28215.1755772 (1143) total: 2.63s remaining: 20.4s
1144: learn: 19204.4191634 test: 28215.0468725 best: 28215.0468725 (1144) total: 2.63s remaining: 20.3s
1145: learn: 19203.6251143 test: 28215.1576951 best: 28215.0468725 (1144) total: 2.63s remaining: 20.3s
1146: learn: 19202.9105584 test: 28215.2650561 best: 28215.0468725 (1144) total: 2.63s remaining: 20.3s
1147: learn: 19202.3233878 test: 28215.1765683 best: 28215.0468725 (1144) total: 2.64s remaining: 20.3s
1148: learn: 19201.7599827 test: 28214.4925977 best: 28214.4925977 (1148) total: 2.64s remaining: 20.3s
1149: learn: 19201.0411683 test: 28214.0322766 best: 28214.0322766 (1149) total: 2.64s remaining: 20.3s
1150: learn: 19200.2663841 test: 28214.1723643 best: 28214.0322766 (1149) total: 2.64s remaining: 20.3s
1151: learn: 19197.9137017 test: 28213.2918463 best: 28213.2918463 (1151) total: 2.65s remaining: 20.3s
1152: learn: 19197.3009425 test: 28212.8488953 best: 28212.8488953 (1152) total: 2.65s remaining: 20.3s
1153: learn: 19195.9871217 test: 28212.6040514 best: 28212.6040514 (1153) total: 2.65s remaining: 20.3s
1154: learn: 19194.9579994 test: 28211.8578523 best: 28211.8578523 (1154) total: 2.65s remaining: 20.3s
1155: learn: 19194.3496820 test: 28211.0801069 best: 28211.0801069 (1155) total: 2.65s remaining: 20.3s
1156: learn: 19193.6410587 test: 28210.8966991 best: 28210.8966991 (1156) total: 2.66s remaining: 20.3s
1157: learn: 19192.9034133 test: 28210.2257899 best: 28210.2257899 (1157) total: 2.66s remaining: 20.3s
1158: learn: 19191.9732647 test: 28209.3506504 best: 28209.3506504 (1158) total: 2.66s remaining: 20.3s
1159: learn: 19191.3901652 test: 28209.2440409 best: 28209.2440409 (1159) total: 2.66s remaining: 20.3s
1160: learn: 19190.1245348 test: 28207.7728463 best: 28207.7728463 (1160) total: 2.66s remaining: 20.3s
1161: learn: 19175.8679730 test: 28203.8693375 best: 28203.8693375 (1161) total: 2.67s remaining: 20.3s
1162: learn: 19161.8314037 test: 28200.0691993 best: 28200.0691993 (1162) total: 2.67s remaining: 20.3s
1163: learn: 19160.5146643 test: 28200.0027638 best: 28200.0027638 (1163) total: 2.67s remaining: 20.3s
1164: learn: 19159.3604617 test: 28199.9602609 best: 28199.9602609 (1164) total: 2.67s remaining: 20.3s
1165: learn: 19158.2392030 test: 28197.8565355 best: 28197.8565355 (1165) total: 2.67s remaining: 20.3s
1166: learn: 19145.7193612 test: 28185.6089265 best: 28185.6089265 (1166) total: 2.68s remaining: 20.3s
1167: learn: 19144.4548968 test: 28185.3331227 best: 28185.3331227 (1167) total: 2.68s remaining: 20.3s
1168: learn: 19143.7484479 test: 28184.9953024 best: 28184.9953024 (1168) total: 2.68s remaining: 20.3s
1169: learn: 19142.4606333 test: 28184.7732662 best: 28184.7732662 (1169) total: 2.68s remaining: 20.3s
1170: learn: 19140.0778399 test: 28178.8979186 best: 28178.8979186 (1170) total: 2.69s remaining: 20.2s
1171: learn: 19139.0733836 test: 28178.1943383 best: 28178.1943383 (1171) total: 2.69s remaining: 20.3s
1172: learn: 19138.2093251 test: 28178.2961235 best: 28178.1943383 (1171) total: 2.69s remaining: 20.3s
1173: learn: 19137.6641033 test: 28177.6465972 best: 28177.6465972 (1173) total: 2.69s remaining: 20.2s
1174: learn: 19136.5274739 test: 28177.5923565 best: 28177.5923565 (1174) total: 2.69s remaining: 20.2s
1175: learn: 19123.0872563 test: 28173.6387205 best: 28173.6387205 (1175) total: 2.7s remaining: 20.2s
1176: learn: 19122.3690967 test: 28173.1295671 best: 28173.1295671 (1176) total: 2.7s remaining: 20.2s
1177: learn: 19121.5711954 test: 28172.6293418 best: 28172.6293418 (1177) total: 2.7s remaining: 20.2s
1178: learn: 19120.9827642 test: 28171.8994587 best: 28171.8994587 (1178) total: 2.7s remaining: 20.2s
1179: learn: 19119.6026092 test: 28171.8040337 best: 28171.8040337 (1179) total: 2.71s remaining: 20.2s
1180: learn: 19119.0651145 test: 28171.1481872 best: 28171.1481872 (1180) total: 2.71s remaining: 20.2s
1181: learn: 19118.3578754 test: 28171.1535697 best: 28171.1481872 (1180) total: 2.71s remaining: 20.2s
1182: learn: 19117.6121534 test: 28171.2996526 best: 28171.1481872 (1180) total: 2.71s remaining: 20.2s
1183: learn: 19116.9605862 test: 28171.3151551 best: 28171.1481872 (1180) total: 2.71s remaining: 20.2s
1184: learn: 19114.0741226 test: 28167.6526577 best: 28167.6526577 (1184) total: 2.72s remaining: 20.2s
1185: learn: 19113.3977109 test: 28167.7401915 best: 28167.6526577 (1184) total: 2.72s remaining: 20.2s
1186: learn: 19111.4793780 test: 28166.4601194 best: 28166.4601194 (1186) total: 2.72s remaining: 20.2s
1187: learn: 19110.7779643 test: 28166.4803161 best: 28166.4601194 (1186) total: 2.72s remaining: 20.2s
1188: learn: 19110.0594370 test: 28165.8322331 best: 28165.8322331 (1188) total: 2.72s remaining: 20.2s
1189: learn: 19108.7883244 test: 28165.7592235 best: 28165.7592235 (1189) total: 2.73s remaining: 20.2s
1190: learn: 19104.0281711 test: 28157.3939180 best: 28157.3939180 (1190) total: 2.73s remaining: 20.2s
1191: learn: 19080.7141861 test: 28144.1401357 best: 28144.1401357 (1191) total: 2.73s remaining: 20.2s
1192: learn: 19078.6777468 test: 28142.4111479 best: 28142.4111479 (1192) total: 2.73s remaining: 20.2s
1193: learn: 19078.1250209 test: 28142.5423344 best: 28142.4111479 (1192) total: 2.74s remaining: 20.2s
1194: learn: 19064.6684780 test: 28138.1988261 best: 28138.1988261 (1194) total: 2.74s remaining: 20.2s
1195: learn: 19063.2889047 test: 28138.0859014 best: 28138.0859014 (1195) total: 2.74s remaining: 20.2s
1196: learn: 19062.5269204 test: 28136.7362584 best: 28136.7362584 (1196) total: 2.74s remaining: 20.2s
1197: learn: 19061.9711873 test: 28136.1087506 best: 28136.1087506 (1197) total: 2.75s remaining: 20.2s
1198: learn: 19060.5887757 test: 28135.9814840 best: 28135.9814840 (1198) total: 2.75s remaining: 20.2s
1199: learn: 19059.9995188 test: 28135.3839547 best: 28135.3839547 (1199) total: 2.75s remaining: 20.2s
1200: learn: 19059.4202002 test: 28134.6104765 best: 28134.6104765 (1200) total: 2.75s remaining: 20.2s
1201: learn: 19037.3208564 test: 28122.4990646 best: 28122.4990646 (1201) total: 2.75s remaining: 20.2s
1202: learn: 19036.0663294 test: 28122.4385616 best: 28122.4385616 (1202) total: 2.76s remaining: 20.2s
1203: learn: 19034.9301409 test: 28121.4640177 best: 28121.4640177 (1203) total: 2.76s remaining: 20.2s
1204: learn: 19034.2242218 test: 28120.8213000 best: 28120.8213000 (1204) total: 2.76s remaining: 20.1s
1205: learn: 19032.2838434 test: 28115.6921307 best: 28115.6921307 (1205) total: 2.76s remaining: 20.1s
1206: learn: 19031.5745615 test: 28115.0598988 best: 28115.0598988 (1206) total: 2.76s remaining: 20.1s
1207: learn: 19030.2132390 test: 28114.9513971 best: 28114.9513971 (1207) total: 2.77s remaining: 20.1s
1208: learn: 19013.6463598 test: 28101.8504569 best: 28101.8504569 (1208) total: 2.77s remaining: 20.1s
1209: learn: 19007.2595905 test: 28101.1615768 best: 28101.1615768 (1209) total: 2.77s remaining: 20.1s
1210: learn: 18985.6556660 test: 28083.8225965 best: 28083.8225965 (1210) total: 2.77s remaining: 20.1s
1211: learn: 18984.5158719 test: 28082.3602132 best: 28082.3602132 (1211) total: 2.77s remaining: 20.1s
1212: learn: 18983.1658222 test: 28082.2601441 best: 28082.2601441 (1212) total: 2.78s remaining: 20.1s
1213: learn: 18982.6463520 test: 28081.6212865 best: 28081.6212865 (1213) total: 2.78s remaining: 20.1s
1214: learn: 18981.5266718 test: 28080.6737048 best: 28080.6737048 (1214) total: 2.78s remaining: 20.1s
1215: learn: 18980.9231525 test: 28080.0006648 best: 28080.0006648 (1215) total: 2.78s remaining: 20.1s
1216: learn: 18979.8688744 test: 28079.0598987 best: 28079.0598987 (1216) total: 2.79s remaining: 20.1s
1217: learn: 18972.8068742 test: 28080.9211069 best: 28079.0598987 (1216) total: 2.79s remaining: 20.1s
1218: learn: 18972.1042898 test: 28080.2525860 best: 28079.0598987 (1216) total: 2.79s remaining: 20.1s
1219: learn: 18970.9823012 test: 28079.2697538 best: 28079.0598987 (1216) total: 2.79s remaining: 20.1s
1220: learn: 18970.2451212 test: 28078.5076937 best: 28078.5076937 (1220) total: 2.8s remaining: 20.1s
1221: learn: 18969.0177071 test: 28078.3848141 best: 28078.3848141 (1221) total: 2.8s remaining: 20.1s
1222: learn: 18967.8186467 test: 28078.3648127 best: 28078.3648127 (1222) total: 2.8s remaining: 20.1s
1223: learn: 18967.2648404 test: 28077.6811843 best: 28077.6811843 (1223) total: 2.81s remaining: 20.1s
1224: learn: 18966.7307030 test: 28077.5858423 best: 28077.5858423 (1224) total: 2.81s remaining: 20.1s
1225: learn: 18965.8644085 test: 28076.3980130 best: 28076.3980130 (1225) total: 2.81s remaining: 20.1s
1226: learn: 18965.3236514 test: 28076.3891815 best: 28076.3891815 (1226) total: 2.82s remaining: 20.2s
1227: learn: 18964.7015564 test: 28076.3444375 best: 28076.3444375 (1227) total: 2.82s remaining: 20.2s
1228: learn: 18962.8996629 test: 28076.1788398 best: 28076.1788398 (1228) total: 2.83s remaining: 20.2s
1229: learn: 18962.1764304 test: 28074.8852182 best: 28074.8852182 (1229) total: 2.83s remaining: 20.2s
1230: learn: 18959.8443234 test: 28068.6865849 best: 28068.6865849 (1230) total: 2.83s remaining: 20.2s
1231: learn: 18958.5251696 test: 28068.5723388 best: 28068.5723388 (1231) total: 2.84s remaining: 20.2s
1232: learn: 18957.8538002 test: 28068.4360321 best: 28068.4360321 (1232) total: 2.84s remaining: 20.2s
1233: learn: 18957.2525579 test: 28067.7773548 best: 28067.7773548 (1233) total: 2.85s remaining: 20.2s
1234: learn: 18954.1489395 test: 28068.3896861 best: 28067.7773548 (1233) total: 2.85s remaining: 20.2s
1235: learn: 18953.6636094 test: 28067.7615527 best: 28067.7615527 (1235) total: 2.85s remaining: 20.2s
1236: learn: 18953.1386057 test: 28067.1540470 best: 28067.1540470 (1236) total: 2.86s remaining: 20.2s
1237: learn: 18950.5531087 test: 28060.6595351 best: 28060.6595351 (1237) total: 2.86s remaining: 20.2s
1238: learn: 18949.9525381 test: 28059.9132472 best: 28059.9132472 (1238) total: 2.86s remaining: 20.2s
1239: learn: 18933.9337805 test: 28053.6026154 best: 28053.6026154 (1239) total: 2.87s remaining: 20.2s
1240: learn: 18933.2855386 test: 28053.1899287 best: 28053.1899287 (1240) total: 2.87s remaining: 20.3s
1241: learn: 18932.8034223 test: 28052.5789942 best: 28052.5789942 (1241) total: 2.87s remaining: 20.3s
1242: learn: 18932.1680175 test: 28052.4297180 best: 28052.4297180 (1242) total: 2.88s remaining: 20.3s
1243: learn: 18930.9797320 test: 28052.3774371 best: 28052.3774371 (1243) total: 2.88s remaining: 20.3s
1244: learn: 18929.6782805 test: 28052.2785766 best: 28052.2785766 (1244) total: 2.88s remaining: 20.3s
1245: learn: 18929.1379328 test: 28051.6852039 best: 28051.6852039 (1245) total: 2.88s remaining: 20.3s
1246: learn: 18928.2945154 test: 28050.9677560 best: 28050.9677560 (1246) total: 2.89s remaining: 20.3s
1247: learn: 18927.5810384 test: 28050.2630942 best: 28050.2630942 (1247) total: 2.89s remaining: 20.3s
1248: learn: 18926.4204464 test: 28050.2387426 best: 28050.2387426 (1248) total: 2.89s remaining: 20.3s
1249: learn: 18925.6015947 test: 28049.9684150 best: 28049.9684150 (1249) total: 2.9s remaining: 20.3s
1250: learn: 18924.6761225 test: 28049.3241141 best: 28049.3241141 (1250) total: 2.9s remaining: 20.3s
1251: learn: 18923.5949901 test: 28048.3687290 best: 28048.3687290 (1251) total: 2.9s remaining: 20.3s
1252: learn: 18909.8997407 test: 28043.3938662 best: 28043.3938662 (1252) total: 2.91s remaining: 20.3s
1253: learn: 18909.4006767 test: 28042.7882576 best: 28042.7882576 (1253) total: 2.91s remaining: 20.3s
1254: learn: 18906.3237085 test: 28039.2768121 best: 28039.2768121 (1254) total: 2.92s remaining: 20.3s
1255: learn: 18905.1683073 test: 28039.1910354 best: 28039.1910354 (1255) total: 2.92s remaining: 20.3s
1256: learn: 18900.5278881 test: 28033.6986900 best: 28033.6986900 (1256) total: 2.92s remaining: 20.3s
1257: learn: 18882.0233140 test: 28025.8110295 best: 28025.8110295 (1257) total: 2.93s remaining: 20.3s
1258: learn: 18866.0105242 test: 28019.1186750 best: 28019.1186750 (1258) total: 2.93s remaining: 20.3s
1259: learn: 18865.3589616 test: 28017.9286490 best: 28017.9286490 (1259) total: 2.93s remaining: 20.3s
1260: learn: 18864.7287701 test: 28017.9731401 best: 28017.9286490 (1259) total: 2.93s remaining: 20.3s
1261: learn: 18864.0654168 test: 28017.8948756 best: 28017.8948756 (1261) total: 2.93s remaining: 20.3s
1262: learn: 18863.4431443 test: 28017.8339447 best: 28017.8339447 (1262) total: 2.94s remaining: 20.3s
1263: learn: 18861.5562127 test: 28015.5789747 best: 28015.5789747 (1263) total: 2.94s remaining: 20.3s
1264: learn: 18860.9389328 test: 28015.1526276 best: 28015.1526276 (1264) total: 2.94s remaining: 20.3s
1265: learn: 18860.2317495 test: 28014.5460866 best: 28014.5460866 (1265) total: 2.94s remaining: 20.3s
1266: learn: 18859.7405421 test: 28013.9035351 best: 28013.9035351 (1266) total: 2.94s remaining: 20.3s
1267: learn: 18858.7301484 test: 28014.0094909 best: 28013.9035351 (1266) total: 2.95s remaining: 20.3s
1268: learn: 18855.7113831 test: 28014.5318694 best: 28013.9035351 (1266) total: 2.95s remaining: 20.3s
1269: learn: 18836.0074732 test: 28002.7748809 best: 28002.7748809 (1269) total: 2.95s remaining: 20.3s
1270: learn: 18834.9788980 test: 28002.6171465 best: 28002.6171465 (1270) total: 2.96s remaining: 20.3s
1271: learn: 18833.3274027 test: 28002.4224702 best: 28002.4224702 (1271) total: 2.96s remaining: 20.3s
1272: learn: 18830.6856299 test: 27997.7358323 best: 27997.7358323 (1272) total: 2.96s remaining: 20.3s
1273: learn: 18828.7536573 test: 27994.2370097 best: 27994.2370097 (1273) total: 2.96s remaining: 20.3s
1274: learn: 18827.7036054 test: 27993.3129207 best: 27993.3129207 (1274) total: 2.96s remaining: 20.3s
1275: learn: 18826.5871068 test: 27991.0506978 best: 27991.0506978 (1275) total: 2.97s remaining: 20.3s
1276: learn: 18825.5825436 test: 27991.2569419 best: 27991.0506978 (1275) total: 2.97s remaining: 20.3s
1277: learn: 18825.0603202 test: 27991.4687207 best: 27991.0506978 (1275) total: 2.97s remaining: 20.3s
1278: learn: 18824.2573168 test: 27990.5468534 best: 27990.5468534 (1278) total: 2.97s remaining: 20.3s
1279: learn: 18822.2945046 test: 27986.5219136 best: 27986.5219136 (1279) total: 2.98s remaining: 20.3s
1280: learn: 18821.0490963 test: 27986.4305226 best: 27986.4305226 (1280) total: 2.98s remaining: 20.3s
1281: learn: 18820.5428805 test: 27985.8395379 best: 27985.8395379 (1281) total: 2.98s remaining: 20.3s
1282: learn: 18819.9056814 test: 27985.7937688 best: 27985.7937688 (1282) total: 2.98s remaining: 20.3s
1283: learn: 18818.7987483 test: 27985.7754653 best: 27985.7754653 (1283) total: 2.98s remaining: 20.3s
1284: learn: 18817.9062801 test: 27984.7602575 best: 27984.7602575 (1284) total: 2.99s remaining: 20.3s
1285: learn: 18816.6699332 test: 27984.6709448 best: 27984.6709448 (1285) total: 2.99s remaining: 20.3s
1286: learn: 18815.9999155 test: 27984.5367398 best: 27984.5367398 (1286) total: 2.99s remaining: 20.3s
1287: learn: 18814.7676475 test: 27984.4492449 best: 27984.4492449 (1287) total: 2.99s remaining: 20.3s
1288: learn: 18813.5918120 test: 27982.9327814 best: 27982.9327814 (1288) total: 3s remaining: 20.3s
1289: learn: 18812.3641256 test: 27982.8479337 best: 27982.8479337 (1289) total: 3s remaining: 20.3s
1290: learn: 18811.8805628 test: 27982.2492266 best: 27982.2492266 (1290) total: 3s remaining: 20.2s
1291: learn: 18811.3978921 test: 27981.6535404 best: 27981.6535404 (1291) total: 3s remaining: 20.2s
1292: learn: 18810.7824916 test: 27981.0032187 best: 27981.0032187 (1292) total: 3s remaining: 20.2s
1293: learn: 18810.0418919 test: 27980.7089530 best: 27980.7089530 (1293) total: 3.01s remaining: 20.2s
1294: learn: 18809.0696054 test: 27980.6669982 best: 27980.6669982 (1294) total: 3.01s remaining: 20.2s
1295: learn: 18808.0726270 test: 27980.5037663 best: 27980.5037663 (1295) total: 3.01s remaining: 20.2s
1296: learn: 18807.4910228 test: 27980.2726996 best: 27980.2726996 (1296) total: 3.02s remaining: 20.2s
1297: learn: 18806.2958471 test: 27980.1949263 best: 27980.1949263 (1297) total: 3.02s remaining: 20.2s
1298: learn: 18805.3021612 test: 27980.2681650 best: 27980.1949263 (1297) total: 3.02s remaining: 20.2s
1299: learn: 18804.8067108 test: 27979.6982941 best: 27979.6982941 (1299) total: 3.02s remaining: 20.2s
1300: learn: 18803.5940136 test: 27979.6093647 best: 27979.6093647 (1300) total: 3.02s remaining: 20.2s
1301: learn: 18790.9758152 test: 27970.4118484 best: 27970.4118484 (1301) total: 3.03s remaining: 20.2s
1302: learn: 18773.8887817 test: 27963.6093333 best: 27963.6093333 (1302) total: 3.03s remaining: 20.2s
1303: learn: 18773.2442265 test: 27962.5066136 best: 27962.5066136 (1303) total: 3.03s remaining: 20.2s
1304: learn: 18772.5945193 test: 27962.6505442 best: 27962.5066136 (1303) total: 3.03s remaining: 20.2s
1305: learn: 18771.9447950 test: 27962.2321312 best: 27962.2321312 (1305) total: 3.04s remaining: 20.2s
1306: learn: 18771.3586013 test: 27961.8046409 best: 27961.8046409 (1306) total: 3.04s remaining: 20.2s
1307: learn: 18770.7113851 test: 27961.9456726 best: 27961.8046409 (1306) total: 3.04s remaining: 20.2s
1308: learn: 18752.4320145 test: 27952.0431565 best: 27952.0431565 (1308) total: 3.05s remaining: 20.2s
1309: learn: 18751.9925984 test: 27951.4409885 best: 27951.4409885 (1309) total: 3.05s remaining: 20.2s
1310: learn: 18751.5389006 test: 27950.8501854 best: 27950.8501854 (1310) total: 3.05s remaining: 20.2s
1311: learn: 18750.0385357 test: 27947.3312058 best: 27947.3312058 (1311) total: 3.06s remaining: 20.2s
1312: learn: 18748.1020707 test: 27943.2041960 best: 27943.2041960 (1312) total: 3.06s remaining: 20.2s
1313: learn: 18746.1823282 test: 27939.2721307 best: 27939.2721307 (1313) total: 3.06s remaining: 20.2s
1314: learn: 18745.6402460 test: 27939.2844119 best: 27939.2721307 (1313) total: 3.06s remaining: 20.2s
1315: learn: 18743.7383500 test: 27935.3010349 best: 27935.3010349 (1315) total: 3.07s remaining: 20.2s
1316: learn: 18742.5456873 test: 27935.2012787 best: 27935.2012787 (1316) total: 3.07s remaining: 20.2s
1317: learn: 18741.9164771 test: 27934.6950062 best: 27934.6950062 (1317) total: 3.07s remaining: 20.2s
1318: learn: 18739.0431887 test: 27932.6535853 best: 27932.6535853 (1318) total: 3.07s remaining: 20.2s
1319: learn: 18738.5887814 test: 27932.0648414 best: 27932.0648414 (1319) total: 3.08s remaining: 20.2s
1320: learn: 18737.5777664 test: 27931.1399997 best: 27931.1399997 (1320) total: 3.08s remaining: 20.2s
1321: learn: 18736.9567183 test: 27930.5545706 best: 27930.5545706 (1321) total: 3.08s remaining: 20.2s
1322: learn: 18736.1200564 test: 27929.7542865 best: 27929.7542865 (1322) total: 3.08s remaining: 20.2s
1323: learn: 18733.5061076 test: 27929.8419602 best: 27929.7542865 (1322) total: 3.08s remaining: 20.2s
1324: learn: 18733.0348970 test: 27929.2622032 best: 27929.2622032 (1324) total: 3.09s remaining: 20.2s
1325: learn: 18732.0265826 test: 27928.2852319 best: 27928.2852319 (1325) total: 3.09s remaining: 20.2s
1326: learn: 18731.5418410 test: 27927.6410322 best: 27927.6410322 (1326) total: 3.09s remaining: 20.2s
1327: learn: 18731.0872822 test: 27927.0581149 best: 27927.0581149 (1327) total: 3.09s remaining: 20.2s
1328: learn: 18730.4672528 test: 27926.9769007 best: 27926.9769007 (1328) total: 3.1s remaining: 20.2s
1329: learn: 18729.8016865 test: 27926.5058149 best: 27926.5058149 (1329) total: 3.1s remaining: 20.2s
1330: learn: 18729.1086270 test: 27925.9449679 best: 27925.9449679 (1330) total: 3.1s remaining: 20.2s
1331: learn: 18727.9275423 test: 27925.8340832 best: 27925.8340832 (1331) total: 3.1s remaining: 20.2s
1332: learn: 18727.4780647 test: 27925.2513635 best: 27925.2513635 (1332) total: 3.1s remaining: 20.2s
1333: learn: 18726.9034922 test: 27925.1724175 best: 27925.1724175 (1333) total: 3.11s remaining: 20.2s
1334: learn: 18709.2563644 test: 27919.1980178 best: 27919.1980178 (1334) total: 3.11s remaining: 20.2s
1335: learn: 18694.1178919 test: 27908.8689352 best: 27908.8689352 (1335) total: 3.11s remaining: 20.2s
1336: learn: 18693.1632566 test: 27908.7813269 best: 27908.7813269 (1336) total: 3.11s remaining: 20.2s
1337: learn: 18675.7558784 test: 27895.6583588 best: 27895.6583588 (1337) total: 3.12s remaining: 20.2s
1338: learn: 18673.1117611 test: 27894.0470265 best: 27894.0470265 (1338) total: 3.12s remaining: 20.2s
1339: learn: 18672.6575615 test: 27893.9812044 best: 27893.9812044 (1339) total: 3.12s remaining: 20.2s
1340: learn: 18672.1053264 test: 27894.0793139 best: 27893.9812044 (1339) total: 3.12s remaining: 20.2s
1341: learn: 18670.9457526 test: 27893.9940016 best: 27893.9812044 (1339) total: 3.12s remaining: 20.2s
1342: learn: 18655.9984286 test: 27887.8533136 best: 27887.8533136 (1342) total: 3.13s remaining: 20.2s
1343: learn: 18655.4636390 test: 27887.1774861 best: 27887.1774861 (1343) total: 3.13s remaining: 20.1s
1344: learn: 18651.3327779 test: 27879.6918805 best: 27879.6918805 (1344) total: 3.13s remaining: 20.1s
1345: learn: 18650.2962861 test: 27879.6654187 best: 27879.6654187 (1345) total: 3.13s remaining: 20.1s
1346: learn: 18649.4030825 test: 27878.6187249 best: 27878.6187249 (1346) total: 3.13s remaining: 20.1s
1347: learn: 18648.4274356 test: 27877.7455071 best: 27877.7455071 (1347) total: 3.14s remaining: 20.1s
1348: learn: 18647.9128604 test: 27877.1379673 best: 27877.1379673 (1348) total: 3.14s remaining: 20.1s
1349: learn: 18646.7689986 test: 27877.0600802 best: 27877.0600802 (1349) total: 3.14s remaining: 20.1s
1350: learn: 18646.3313547 test: 27877.2016068 best: 27877.0600802 (1349) total: 3.14s remaining: 20.1s
1351: learn: 18645.1909224 test: 27877.1236921 best: 27877.0600802 (1349) total: 3.15s remaining: 20.1s
1352: learn: 18644.7328805 test: 27876.5679088 best: 27876.5679088 (1352) total: 3.15s remaining: 20.1s
1353: learn: 18644.2949703 test: 27876.0306991 best: 27876.0306991 (1353) total: 3.15s remaining: 20.1s
1354: learn: 18643.7577212 test: 27876.1305074 best: 27876.0306991 (1353) total: 3.15s remaining: 20.1s
1355: learn: 18643.2747224 test: 27875.6375280 best: 27875.6375280 (1355) total: 3.15s remaining: 20.1s
1356: learn: 18623.6428300 test: 27860.4045603 best: 27860.4045603 (1356) total: 3.15s remaining: 20.1s
1357: learn: 18620.0549009 test: 27860.8871320 best: 27860.4045603 (1356) total: 3.16s remaining: 20.1s
1358: learn: 18617.2448211 test: 27861.3091512 best: 27860.4045603 (1356) total: 3.16s remaining: 20.1s
1359: learn: 18616.8170255 test: 27861.4461718 best: 27860.4045603 (1356) total: 3.16s remaining: 20.1s
1360: learn: 18616.3172644 test: 27860.8734619 best: 27860.4045603 (1356) total: 3.16s remaining: 20.1s
1361: learn: 18615.7212674 test: 27860.3916135 best: 27860.3916135 (1361) total: 3.17s remaining: 20.1s
1362: learn: 18615.1732787 test: 27859.3370540 best: 27859.3370540 (1362) total: 3.17s remaining: 20.1s
1363: learn: 18614.6713456 test: 27859.3565994 best: 27859.3370540 (1362) total: 3.17s remaining: 20.1s
1364: learn: 18614.1244761 test: 27858.9365782 best: 27858.9365782 (1364) total: 3.17s remaining: 20.1s
1365: learn: 18613.6863188 test: 27858.3994531 best: 27858.3994531 (1365) total: 3.17s remaining: 20.1s
1366: learn: 18597.0253148 test: 27852.4620086 best: 27852.4620086 (1366) total: 3.18s remaining: 20.1s
1367: learn: 18596.0321934 test: 27851.1071195 best: 27851.1071195 (1367) total: 3.18s remaining: 20.1s
1368: learn: 18595.4314357 test: 27850.5438829 best: 27850.5438829 (1368) total: 3.18s remaining: 20.1s
1369: learn: 18594.8434250 test: 27850.1204718 best: 27850.1204718 (1369) total: 3.18s remaining: 20.1s
1370: learn: 18592.1598281 test: 27842.7730313 best: 27842.7730313 (1370) total: 3.19s remaining: 20.1s
1371: learn: 18591.6951463 test: 27842.1717765 best: 27842.1717765 (1371) total: 3.19s remaining: 20s
1372: learn: 18590.6811918 test: 27842.1150193 best: 27842.1150193 (1372) total: 3.19s remaining: 20s
1373: learn: 18590.2574515 test: 27841.5393345 best: 27841.5393345 (1373) total: 3.19s remaining: 20s
1374: learn: 18589.8534173 test: 27840.9943079 best: 27840.9943079 (1374) total: 3.19s remaining: 20s
1375: learn: 18588.8415044 test: 27839.5425404 best: 27839.5425404 (1375) total: 3.2s remaining: 20s
1376: learn: 18588.4173122 test: 27838.9991878 best: 27838.9991878 (1376) total: 3.2s remaining: 20s
1377: learn: 18575.9491976 test: 27835.7555328 best: 27835.7555328 (1377) total: 3.2s remaining: 20s
1378: learn: 18560.9703776 test: 27827.1829251 best: 27827.1829251 (1378) total: 3.2s remaining: 20s
1379: learn: 18560.5123503 test: 27826.6147177 best: 27826.6147177 (1379) total: 3.21s remaining: 20s
1380: learn: 18560.1109096 test: 27826.0953662 best: 27826.0953662 (1380) total: 3.21s remaining: 20s
1381: learn: 18559.5268191 test: 27825.1369261 best: 27825.1369261 (1381) total: 3.21s remaining: 20s
1382: learn: 18558.9835646 test: 27825.2510679 best: 27825.1369261 (1381) total: 3.21s remaining: 20s
1383: learn: 18558.4883177 test: 27825.3421406 best: 27825.1369261 (1381) total: 3.21s remaining: 20s
1384: learn: 18557.7251744 test: 27824.2904514 best: 27824.2904514 (1384) total: 3.21s remaining: 20s
1385: learn: 18556.7216104 test: 27824.2252012 best: 27824.2252012 (1385) total: 3.22s remaining: 20s
1386: learn: 18556.1424737 test: 27823.7510439 best: 27823.7510439 (1386) total: 3.22s remaining: 20s
1387: learn: 18540.8050978 test: 27818.2962229 best: 27818.2962229 (1387) total: 3.22s remaining: 20s
1388: learn: 18540.0371142 test: 27817.6516039 best: 27817.6516039 (1388) total: 3.22s remaining: 20s
1389: learn: 18539.6232603 test: 27817.7805063 best: 27817.6516039 (1388) total: 3.23s remaining: 20s
1390: learn: 18539.0648279 test: 27817.3566757 best: 27817.3566757 (1390) total: 3.23s remaining: 20s
1391: learn: 18538.6664922 test: 27816.8290780 best: 27816.8290780 (1391) total: 3.23s remaining: 20s
1392: learn: 18538.0723884 test: 27816.8449524 best: 27816.8290780 (1391) total: 3.23s remaining: 20s
1393: learn: 18537.5222100 test: 27816.9563339 best: 27816.8290780 (1391) total: 3.23s remaining: 20s
1394: learn: 18536.7662670 test: 27815.7984276 best: 27815.7984276 (1394) total: 3.24s remaining: 20s
1395: learn: 18535.6688190 test: 27815.7068813 best: 27815.7068813 (1395) total: 3.24s remaining: 20s
1396: learn: 18532.7681731 test: 27810.2617457 best: 27810.2617457 (1396) total: 3.24s remaining: 20s
1397: learn: 18532.2043174 test: 27810.1366899 best: 27810.1366899 (1397) total: 3.24s remaining: 20s
1398: learn: 18531.6695629 test: 27810.2748116 best: 27810.1366899 (1397) total: 3.25s remaining: 20s
1399: learn: 18516.2113124 test: 27801.6748841 best: 27801.6748841 (1399) total: 3.25s remaining: 20s
1400: learn: 18515.5994606 test: 27801.2206880 best: 27801.2206880 (1400) total: 3.25s remaining: 20s
1401: learn: 18515.0575975 test: 27801.2331515 best: 27801.2206880 (1400) total: 3.25s remaining: 20s
1402: learn: 18514.4417406 test: 27801.1593133 best: 27801.1593133 (1402) total: 3.25s remaining: 20s
1403: learn: 18513.8856207 test: 27801.1010036 best: 27801.1010036 (1403) total: 3.26s remaining: 19.9s
1404: learn: 18513.1278490 test: 27800.4121806 best: 27800.4121806 (1404) total: 3.26s remaining: 19.9s
1405: learn: 18512.1907398 test: 27799.5381381 best: 27799.5381381 (1405) total: 3.26s remaining: 19.9s
1406: learn: 18511.5853964 test: 27799.0749471 best: 27799.0749471 (1406) total: 3.26s remaining: 19.9s
1407: learn: 18498.4445784 test: 27798.8314821 best: 27798.8314821 (1407) total: 3.27s remaining: 19.9s
1408: learn: 18497.6669294 test: 27798.1982381 best: 27798.1982381 (1408) total: 3.27s remaining: 19.9s
1409: learn: 18495.8110519 test: 27797.4491486 best: 27797.4491486 (1409) total: 3.27s remaining: 19.9s
1410: learn: 18495.2611179 test: 27797.6894595 best: 27797.4491486 (1409) total: 3.27s remaining: 19.9s
1411: learn: 18494.6822467 test: 27796.8396310 best: 27796.8396310 (1411) total: 3.27s remaining: 19.9s
1412: learn: 18494.1375882 test: 27796.4231183 best: 27796.4231183 (1412) total: 3.28s remaining: 19.9s
1413: learn: 18493.7040024 test: 27795.9885976 best: 27795.9885976 (1413) total: 3.28s remaining: 19.9s
1414: learn: 18488.7275421 test: 27787.4252260 best: 27787.4252260 (1414) total: 3.28s remaining: 19.9s
1415: learn: 18488.2456032 test: 27787.4303005 best: 27787.4252260 (1414) total: 3.28s remaining: 19.9s
1416: learn: 18474.4242361 test: 27776.5813849 best: 27776.5813849 (1416) total: 3.29s remaining: 19.9s
1417: learn: 18473.4765602 test: 27775.2496608 best: 27775.2496608 (1417) total: 3.29s remaining: 19.9s
1418: learn: 18472.9841385 test: 27775.3861799 best: 27775.2496608 (1417) total: 3.29s remaining: 19.9s
1419: learn: 18464.3650872 test: 27775.5394745 best: 27775.2496608 (1417) total: 3.29s remaining: 19.9s
1420: learn: 18462.8466705 test: 27772.6325706 best: 27772.6325706 (1420) total: 3.29s remaining: 19.9s
1421: learn: 18460.2446451 test: 27772.1755370 best: 27772.1755370 (1421) total: 3.3s remaining: 19.9s
1422: learn: 18459.8649687 test: 27771.6695753 best: 27771.6695753 (1422) total: 3.3s remaining: 19.9s
1423: learn: 18459.4593818 test: 27771.1485557 best: 27771.1485557 (1423) total: 3.3s remaining: 19.9s
1424: learn: 18445.9978835 test: 27763.1665472 best: 27763.1665472 (1424) total: 3.3s remaining: 19.9s
1425: learn: 18432.1714683 test: 27757.7752469 best: 27757.7752469 (1425) total: 3.31s remaining: 19.9s
1426: learn: 18430.4425302 test: 27756.3925849 best: 27756.3925849 (1426) total: 3.31s remaining: 19.9s
1427: learn: 18415.4152863 test: 27752.1078395 best: 27752.1078395 (1427) total: 3.31s remaining: 19.9s
1428: learn: 18413.6674348 test: 27748.2984507 best: 27748.2984507 (1428) total: 3.31s remaining: 19.9s
1429: learn: 18412.9341521 test: 27747.5845278 best: 27747.5845278 (1429) total: 3.31s remaining: 19.9s
1430: learn: 18411.2441436 test: 27746.1724597 best: 27746.1724597 (1430) total: 3.32s remaining: 19.9s
1431: learn: 18410.7212977 test: 27745.1979071 best: 27745.1979071 (1431) total: 3.32s remaining: 19.9s
1432: learn: 18409.8146803 test: 27744.9411546 best: 27744.9411546 (1432) total: 3.32s remaining: 19.9s
1433: learn: 18408.5507092 test: 27743.8760235 best: 27743.8760235 (1433) total: 3.32s remaining: 19.9s
1434: learn: 18408.0461426 test: 27743.5017732 best: 27743.5017732 (1434) total: 3.33s remaining: 19.9s
1435: learn: 18407.6017565 test: 27743.0479947 best: 27743.0479947 (1435) total: 3.33s remaining: 19.9s
1436: learn: 18407.1065073 test: 27742.3216910 best: 27742.3216910 (1436) total: 3.33s remaining: 19.9s
1437: learn: 18406.4721852 test: 27741.7468615 best: 27741.7468615 (1437) total: 3.33s remaining: 19.8s
1438: learn: 18405.7352645 test: 27740.9425976 best: 27740.9425976 (1438) total: 3.33s remaining: 19.8s
1439: learn: 18405.2346837 test: 27740.5205580 best: 27740.5205580 (1439) total: 3.34s remaining: 19.8s
1440: learn: 18386.8828656 test: 27734.5486130 best: 27734.5486130 (1440) total: 3.34s remaining: 19.8s
1441: learn: 18386.3598912 test: 27733.5947403 best: 27733.5947403 (1441) total: 3.34s remaining: 19.8s
1442: learn: 18385.9304314 test: 27733.0749089 best: 27733.0749089 (1442) total: 3.34s remaining: 19.8s
1443: learn: 18385.1923213 test: 27732.4294735 best: 27732.4294735 (1443) total: 3.35s remaining: 19.8s
1444: learn: 18384.5488701 test: 27731.9066726 best: 27731.9066726 (1444) total: 3.35s remaining: 19.8s
1445: learn: 18383.9736007 test: 27731.5237011 best: 27731.5237011 (1445) total: 3.35s remaining: 19.8s
1446: learn: 18383.4596117 test: 27730.4264503 best: 27730.4264503 (1446) total: 3.35s remaining: 19.8s
1447: learn: 18382.9083075 test: 27729.5546280 best: 27729.5546280 (1447) total: 3.35s remaining: 19.8s
1448: learn: 18373.6047021 test: 27729.9403271 best: 27729.5546280 (1447) total: 3.36s remaining: 19.8s
1449: learn: 18373.1669886 test: 27729.4937572 best: 27729.4937572 (1449) total: 3.36s remaining: 19.8s
1450: learn: 18372.5929757 test: 27729.1203643 best: 27729.1203643 (1450) total: 3.36s remaining: 19.8s
1451: learn: 18372.0777207 test: 27728.1226682 best: 27728.1226682 (1451) total: 3.36s remaining: 19.8s
1452: learn: 18371.6821640 test: 27727.6037524 best: 27727.6037524 (1452) total: 3.37s remaining: 19.8s
1453: learn: 18371.1575636 test: 27727.6235127 best: 27727.6037524 (1452) total: 3.37s remaining: 19.8s
1454: learn: 18370.6772157 test: 27727.7172102 best: 27727.6037524 (1452) total: 3.37s remaining: 19.8s
1455: learn: 18369.9069862 test: 27727.6716973 best: 27727.6037524 (1452) total: 3.37s remaining: 19.8s
1456: learn: 18366.2608022 test: 27722.8330433 best: 27722.8330433 (1456) total: 3.38s remaining: 19.8s
1457: learn: 18363.1190878 test: 27717.8777350 best: 27717.8777350 (1457) total: 3.38s remaining: 19.8s
1458: learn: 18360.6603083 test: 27717.7077201 best: 27717.7077201 (1458) total: 3.38s remaining: 19.8s
1459: learn: 18359.4284298 test: 27716.6952363 best: 27716.6952363 (1459) total: 3.38s remaining: 19.8s
1460: learn: 18358.8543278 test: 27716.3025094 best: 27716.3025094 (1460) total: 3.38s remaining: 19.8s
1461: learn: 18356.5233980 test: 27715.7241884 best: 27715.7241884 (1461) total: 3.39s remaining: 19.8s
1462: learn: 18356.0236582 test: 27715.7936621 best: 27715.7241884 (1461) total: 3.39s remaining: 19.8s
1463: learn: 18355.6422871 test: 27715.9237485 best: 27715.7241884 (1461) total: 3.39s remaining: 19.8s
1464: learn: 18350.8050707 test: 27707.8138368 best: 27707.8138368 (1464) total: 3.39s remaining: 19.8s
1465: learn: 18338.8195179 test: 27704.6843009 best: 27704.6843009 (1465) total: 3.4s remaining: 19.8s
1466: learn: 18338.2399504 test: 27704.7981096 best: 27704.6843009 (1465) total: 3.4s remaining: 19.8s
1467: learn: 18337.5410247 test: 27704.1260509 best: 27704.1260509 (1467) total: 3.4s remaining: 19.8s
1468: learn: 18335.9565210 test: 27702.0213141 best: 27702.0213141 (1468) total: 3.4s remaining: 19.8s
1469: learn: 18335.1977178 test: 27701.9779076 best: 27701.9779076 (1469) total: 3.4s remaining: 19.8s
1470: learn: 18334.5561856 test: 27701.1675164 best: 27701.1675164 (1470) total: 3.41s remaining: 19.8s
[Verbose gradient-boosting training log (CatBoost-style output), iterations 1471-2089 condensed: learn loss fell from 18333.67 to 15932.49 and test loss from 27699.95 to 26499.00 over this span; best test loss so far 26498.93 at iteration 2087 (~4.8s elapsed, ~18.1s remaining).]
2090: learn: 15931.5032713 test: 26498.4522147 best: 26498.4522147 (2090) total: 4.79s remaining: 18.1s
2091: learn: 15931.0908632 test: 26498.2753548 best: 26498.2753548 (2091) total: 4.79s remaining: 18.1s
2092: learn: 15923.1522078 test: 26501.1830894 best: 26498.2753548 (2091) total: 4.8s remaining: 18.1s
2093: learn: 15922.9082699 test: 26501.1127773 best: 26498.2753548 (2091) total: 4.8s remaining: 18.1s
2094: learn: 15914.9357977 test: 26499.4714165 best: 26498.2753548 (2091) total: 4.8s remaining: 18.1s
2095: learn: 15911.5456688 test: 26499.4381733 best: 26498.2753548 (2091) total: 4.8s remaining: 18.1s
2096: learn: 15904.5523041 test: 26494.1630933 best: 26494.1630933 (2096) total: 4.8s remaining: 18.1s
2097: learn: 15896.0809012 test: 26495.0281756 best: 26494.1630933 (2096) total: 4.81s remaining: 18.1s
2098: learn: 15895.7715164 test: 26495.4054200 best: 26494.1630933 (2096) total: 4.81s remaining: 18.1s
2099: learn: 15886.8093519 test: 26489.3287259 best: 26489.3287259 (2099) total: 4.81s remaining: 18.1s
2100: learn: 15880.2781409 test: 26489.1239354 best: 26489.1239354 (2100) total: 4.81s remaining: 18.1s
2101: learn: 15876.9964380 test: 26488.2508035 best: 26488.2508035 (2101) total: 4.82s remaining: 18.1s
2102: learn: 15876.6533276 test: 26487.9721118 best: 26487.9721118 (2102) total: 4.82s remaining: 18.1s
2103: learn: 15869.2432707 test: 26484.2627321 best: 26484.2627321 (2103) total: 4.82s remaining: 18.1s
2104: learn: 15863.0534578 test: 26483.8956326 best: 26483.8956326 (2104) total: 4.82s remaining: 18.1s
2105: learn: 15854.8804042 test: 26483.6909603 best: 26483.6909603 (2105) total: 4.83s remaining: 18.1s
2106: learn: 15853.9659204 test: 26482.3855721 best: 26482.3855721 (2106) total: 4.83s remaining: 18.1s
2107: learn: 15853.6450061 test: 26481.8920158 best: 26481.8920158 (2107) total: 4.83s remaining: 18.1s
2108: learn: 15846.4659878 test: 26481.3614941 best: 26481.3614941 (2108) total: 4.83s remaining: 18.1s
2109: learn: 15840.7890682 test: 26481.0454476 best: 26481.0454476 (2109) total: 4.83s remaining: 18.1s
2110: learn: 15838.5424416 test: 26478.2918500 best: 26478.2918500 (2110) total: 4.84s remaining: 18.1s
2111: learn: 15829.4721727 test: 26472.7922433 best: 26472.7922433 (2111) total: 4.84s remaining: 18.1s
2112: learn: 15819.9550006 test: 26475.1394471 best: 26472.7922433 (2111) total: 4.84s remaining: 18.1s
2113: learn: 15814.6003945 test: 26476.0144642 best: 26472.7922433 (2111) total: 4.84s remaining: 18.1s
2114: learn: 15814.3533231 test: 26475.7643249 best: 26472.7922433 (2111) total: 4.84s remaining: 18.1s
2115: learn: 15807.7419595 test: 26473.7594853 best: 26472.7922433 (2111) total: 4.85s remaining: 18.1s
2116: learn: 15803.3060807 test: 26471.4316905 best: 26471.4316905 (2116) total: 4.85s remaining: 18.1s
2117: learn: 15794.3462324 test: 26468.6528336 best: 26468.6528336 (2117) total: 4.85s remaining: 18.1s
2118: learn: 15794.1133930 test: 26468.4457139 best: 26468.4457139 (2118) total: 4.85s remaining: 18.1s
2119: learn: 15790.8777354 test: 26466.2242283 best: 26466.2242283 (2119) total: 4.86s remaining: 18.1s
2120: learn: 15787.0663890 test: 26466.5493194 best: 26466.2242283 (2119) total: 4.86s remaining: 18s
2121: learn: 15783.8098348 test: 26466.1380954 best: 26466.1380954 (2121) total: 4.86s remaining: 18s
2122: learn: 15783.5721997 test: 26465.9032091 best: 26465.9032091 (2122) total: 4.86s remaining: 18s
2123: learn: 15779.5328823 test: 26466.4151404 best: 26465.9032091 (2122) total: 4.87s remaining: 18s
2124: learn: 15776.4532808 test: 26466.4686583 best: 26465.9032091 (2122) total: 4.87s remaining: 18s
2125: learn: 15776.2234089 test: 26466.3913198 best: 26465.9032091 (2122) total: 4.87s remaining: 18s
2126: learn: 15774.1017227 test: 26462.2031164 best: 26462.2031164 (2126) total: 4.87s remaining: 18s
2127: learn: 15773.9269288 test: 26462.2500779 best: 26462.2031164 (2126) total: 4.87s remaining: 18s
2128: learn: 15768.3128999 test: 26463.0165319 best: 26462.2031164 (2126) total: 4.88s remaining: 18s
2129: learn: 15768.0351159 test: 26462.8857739 best: 26462.2031164 (2126) total: 4.88s remaining: 18s
2130: learn: 15759.7653458 test: 26460.0521864 best: 26460.0521864 (2130) total: 4.88s remaining: 18s
2131: learn: 15751.0326232 test: 26457.3757393 best: 26457.3757393 (2131) total: 4.88s remaining: 18s
2132: learn: 15743.8720351 test: 26454.4000600 best: 26454.4000600 (2132) total: 4.88s remaining: 18s
2133: learn: 15743.7288614 test: 26454.4801187 best: 26454.4000600 (2132) total: 4.89s remaining: 18s
2134: learn: 15733.6321856 test: 26447.0736060 best: 26447.0736060 (2134) total: 4.89s remaining: 18s
2135: learn: 15733.4086918 test: 26446.9564379 best: 26446.9564379 (2135) total: 4.89s remaining: 18s
2136: learn: 15723.9625320 test: 26447.1857764 best: 26446.9564379 (2135) total: 4.89s remaining: 18s
2137: learn: 15717.1287085 test: 26444.6116392 best: 26444.6116392 (2137) total: 4.9s remaining: 18s
2138: learn: 15716.3941803 test: 26443.6316207 best: 26443.6316207 (2138) total: 4.9s remaining: 18s
2139: learn: 15716.2294205 test: 26443.9628129 best: 26443.6316207 (2138) total: 4.9s remaining: 18s
2140: learn: 15715.2060101 test: 26443.7516597 best: 26443.6316207 (2138) total: 4.9s remaining: 18s
2141: learn: 15706.6697883 test: 26441.1402078 best: 26441.1402078 (2141) total: 4.91s remaining: 18s
2142: learn: 15706.4276847 test: 26441.2740130 best: 26441.1402078 (2141) total: 4.91s remaining: 18s
2143: learn: 15695.3594909 test: 26435.1368434 best: 26435.1368434 (2143) total: 4.91s remaining: 18s
2144: learn: 15687.0074804 test: 26432.6161773 best: 26432.6161773 (2144) total: 4.91s remaining: 18s
2145: learn: 15685.9739062 test: 26431.2378222 best: 26431.2378222 (2145) total: 4.92s remaining: 18s
2146: learn: 15678.6187345 test: 26427.4592239 best: 26427.4592239 (2146) total: 4.92s remaining: 18s
2147: learn: 15677.4739922 test: 26425.7505765 best: 26425.7505765 (2147) total: 4.92s remaining: 18s
2148: learn: 15671.3087666 test: 26424.6471996 best: 26424.6471996 (2148) total: 4.92s remaining: 18s
2149: learn: 15671.0306339 test: 26425.2964037 best: 26424.6471996 (2148) total: 4.92s remaining: 18s
2150: learn: 15662.4308163 test: 26422.0672239 best: 26422.0672239 (2150) total: 4.93s remaining: 18s
2151: learn: 15660.7544355 test: 26421.3951813 best: 26421.3951813 (2151) total: 4.93s remaining: 18s
2152: learn: 15658.1301440 test: 26417.3103589 best: 26417.3103589 (2152) total: 4.93s remaining: 18s
2153: learn: 15657.8576155 test: 26417.5909621 best: 26417.3103589 (2152) total: 4.93s remaining: 18s
2154: learn: 15648.7273712 test: 26418.4788315 best: 26417.3103589 (2152) total: 4.93s remaining: 18s
2155: learn: 15643.2030017 test: 26417.7413591 best: 26417.3103589 (2152) total: 4.94s remaining: 18s
2156: learn: 15642.9879154 test: 26417.5068639 best: 26417.3103589 (2152) total: 4.94s remaining: 18s
2157: learn: 15642.6219169 test: 26417.8606799 best: 26417.3103589 (2152) total: 4.94s remaining: 18s
2158: learn: 15642.3048613 test: 26418.5229703 best: 26417.3103589 (2152) total: 4.95s remaining: 18s
2159: learn: 15633.0966541 test: 26414.7580117 best: 26414.7580117 (2159) total: 4.95s remaining: 18s
2160: learn: 15625.7816825 test: 26411.6830293 best: 26411.6830293 (2160) total: 4.95s remaining: 18s
2161: learn: 15619.1016079 test: 26406.6311805 best: 26406.6311805 (2161) total: 4.95s remaining: 18s
2162: learn: 15618.6760117 test: 26406.3295032 best: 26406.3295032 (2162) total: 4.95s remaining: 17.9s
2163: learn: 15618.3183928 test: 26407.0137598 best: 26406.3295032 (2162) total: 4.96s remaining: 17.9s
2164: learn: 15617.9838429 test: 26407.3682082 best: 26406.3295032 (2162) total: 4.96s remaining: 17.9s
2165: learn: 15617.7655561 test: 26406.4423123 best: 26406.3295032 (2162) total: 4.96s remaining: 17.9s
2166: learn: 15617.5217219 test: 26406.0236186 best: 26406.0236186 (2166) total: 4.96s remaining: 17.9s
2167: learn: 15605.7045545 test: 26399.5303889 best: 26399.5303889 (2167) total: 4.96s remaining: 17.9s
2168: learn: 15601.1885159 test: 26398.8382231 best: 26398.8382231 (2168) total: 4.97s remaining: 17.9s
2169: learn: 15600.0431665 test: 26398.4129161 best: 26398.4129161 (2169) total: 4.97s remaining: 17.9s
2170: learn: 15599.7529331 test: 26398.6892139 best: 26398.4129161 (2169) total: 4.97s remaining: 17.9s
2171: learn: 15599.2637005 test: 26398.3685454 best: 26398.3685454 (2171) total: 4.97s remaining: 17.9s
2172: learn: 15592.3682755 test: 26396.7498817 best: 26396.7498817 (2172) total: 4.97s remaining: 17.9s
2173: learn: 15591.4338167 test: 26395.5367778 best: 26395.5367778 (2173) total: 4.98s remaining: 17.9s
2174: learn: 15589.6804694 test: 26393.4316937 best: 26393.4316937 (2174) total: 4.98s remaining: 17.9s
2175: learn: 15584.3889368 test: 26394.0364062 best: 26393.4316937 (2174) total: 4.98s remaining: 17.9s
2176: learn: 15584.1811449 test: 26393.8481331 best: 26393.4316937 (2174) total: 4.98s remaining: 17.9s
2177: learn: 15573.3412824 test: 26388.5300740 best: 26388.5300740 (2177) total: 4.99s remaining: 17.9s
2178: learn: 15566.0556023 test: 26388.3321870 best: 26388.3321870 (2178) total: 4.99s remaining: 17.9s
2179: learn: 15561.6306478 test: 26386.2320075 best: 26386.2320075 (2179) total: 4.99s remaining: 17.9s
2180: learn: 15561.3021604 test: 26384.7369397 best: 26384.7369397 (2180) total: 4.99s remaining: 17.9s
2181: learn: 15552.9555254 test: 26379.2684592 best: 26379.2684592 (2181) total: 4.99s remaining: 17.9s
2182: learn: 15552.6442703 test: 26378.9454059 best: 26378.9454059 (2182) total: 5s remaining: 17.9s
2183: learn: 15552.2989289 test: 26379.3270950 best: 26378.9454059 (2182) total: 5s remaining: 17.9s
2184: learn: 15552.0766271 test: 26379.0752550 best: 26378.9454059 (2182) total: 5s remaining: 17.9s
2185: learn: 15542.4010909 test: 26375.1121085 best: 26375.1121085 (2185) total: 5s remaining: 17.9s
2186: learn: 15541.9254321 test: 26374.8805783 best: 26374.8805783 (2186) total: 5s remaining: 17.9s
2187: learn: 15536.2680468 test: 26374.1178416 best: 26374.1178416 (2187) total: 5.01s remaining: 17.9s
2188: learn: 15534.1860695 test: 26371.8687455 best: 26371.8687455 (2188) total: 5.01s remaining: 17.9s
2189: learn: 15533.2371948 test: 26368.2429314 best: 26368.2429314 (2189) total: 5.01s remaining: 17.9s
2190: learn: 15532.9258209 test: 26368.6107344 best: 26368.2429314 (2189) total: 5.01s remaining: 17.9s
2191: learn: 15531.1811486 test: 26367.6532753 best: 26367.6532753 (2191) total: 5.02s remaining: 17.9s
2192: learn: 15530.0775471 test: 26366.0152028 best: 26366.0152028 (2192) total: 5.02s remaining: 17.9s
2193: learn: 15528.9743498 test: 26367.4254312 best: 26366.0152028 (2192) total: 5.02s remaining: 17.9s
2194: learn: 15528.5460891 test: 26367.1713607 best: 26366.0152028 (2192) total: 5.02s remaining: 17.9s
2195: learn: 15528.3382763 test: 26367.0619129 best: 26366.0152028 (2192) total: 5.02s remaining: 17.9s
2196: learn: 15521.8187496 test: 26364.3816348 best: 26364.3816348 (2196) total: 5.03s remaining: 17.9s
2197: learn: 15514.7859612 test: 26360.7831519 best: 26360.7831519 (2197) total: 5.03s remaining: 17.9s
2198: learn: 15508.1777052 test: 26362.5224589 best: 26360.7831519 (2197) total: 5.03s remaining: 17.8s
2199: learn: 15501.4064740 test: 26364.8126556 best: 26360.7831519 (2197) total: 5.03s remaining: 17.8s
2200: learn: 15491.3198968 test: 26363.3875307 best: 26360.7831519 (2197) total: 5.04s remaining: 17.8s
2201: learn: 15484.0255523 test: 26362.3369504 best: 26360.7831519 (2197) total: 5.04s remaining: 17.8s
2202: learn: 15483.7529629 test: 26362.8655069 best: 26360.7831519 (2197) total: 5.04s remaining: 17.8s
2203: learn: 15483.4205626 test: 26362.2372963 best: 26360.7831519 (2197) total: 5.04s remaining: 17.8s
2204: learn: 15474.7531509 test: 26359.5648479 best: 26359.5648479 (2204) total: 5.04s remaining: 17.8s
2205: learn: 15474.2452164 test: 26358.4532108 best: 26358.4532108 (2205) total: 5.05s remaining: 17.8s
2206: learn: 15467.9609097 test: 26356.1857899 best: 26356.1857899 (2206) total: 5.05s remaining: 17.8s
2207: learn: 15467.6533800 test: 26356.5613782 best: 26356.1857899 (2206) total: 5.05s remaining: 17.8s
2208: learn: 15467.4238152 test: 26356.9590703 best: 26356.1857899 (2206) total: 5.05s remaining: 17.8s
2209: learn: 15459.3348974 test: 26351.1587334 best: 26351.1587334 (2209) total: 5.05s remaining: 17.8s
2210: learn: 15459.0220428 test: 26351.7906010 best: 26351.1587334 (2209) total: 5.06s remaining: 17.8s
2211: learn: 15458.7619675 test: 26352.0632266 best: 26351.1587334 (2209) total: 5.06s remaining: 17.8s
2212: learn: 15452.2028508 test: 26349.2493088 best: 26349.2493088 (2212) total: 5.06s remaining: 17.8s
2213: learn: 15451.8967378 test: 26349.6271200 best: 26349.2493088 (2212) total: 5.06s remaining: 17.8s
2214: learn: 15450.7232930 test: 26349.4037939 best: 26349.2493088 (2212) total: 5.07s remaining: 17.8s
2215: learn: 15450.0700777 test: 26348.5127008 best: 26348.5127008 (2215) total: 5.07s remaining: 17.8s
2216: learn: 15445.1418278 test: 26345.5616051 best: 26345.5616051 (2216) total: 5.07s remaining: 17.8s
2217: learn: 15440.3503539 test: 26347.9194105 best: 26345.5616051 (2216) total: 5.07s remaining: 17.8s
2218: learn: 15432.2639418 test: 26344.5334942 best: 26344.5334942 (2218) total: 5.07s remaining: 17.8s
2219: learn: 15427.6405009 test: 26345.1841959 best: 26344.5334942 (2218) total: 5.08s remaining: 17.8s
2220: learn: 15417.4824570 test: 26340.0442426 best: 26340.0442426 (2220) total: 5.08s remaining: 17.8s
2221: learn: 15409.9538495 test: 26340.0213631 best: 26340.0213631 (2221) total: 5.08s remaining: 17.8s
2222: learn: 15404.0101241 test: 26337.2066332 best: 26337.2066332 (2222) total: 5.08s remaining: 17.8s
2223: learn: 15395.2938051 test: 26334.8525563 best: 26334.8525563 (2223) total: 5.08s remaining: 17.8s
2224: learn: 15394.9878001 test: 26335.1590050 best: 26334.8525563 (2223) total: 5.09s remaining: 17.8s
2225: learn: 15394.7744161 test: 26334.9525418 best: 26334.8525563 (2223) total: 5.09s remaining: 17.8s
2226: learn: 15393.5834715 test: 26332.5186498 best: 26332.5186498 (2226) total: 5.09s remaining: 17.8s
2227: learn: 15389.2119072 test: 26332.4502752 best: 26332.4502752 (2227) total: 5.09s remaining: 17.8s
2228: learn: 15383.0384590 test: 26332.0022687 best: 26332.0022687 (2228) total: 5.09s remaining: 17.8s
2229: learn: 15376.6143533 test: 26328.7297360 best: 26328.7297360 (2229) total: 5.1s remaining: 17.8s
2230: learn: 15368.8468627 test: 26328.7336613 best: 26328.7297360 (2229) total: 5.1s remaining: 17.8s
2231: learn: 15366.8844479 test: 26326.1520602 best: 26326.1520602 (2231) total: 5.1s remaining: 17.8s
2232: learn: 15365.2025665 test: 26323.7550846 best: 26323.7550846 (2232) total: 5.1s remaining: 17.8s
2233: learn: 15362.3983816 test: 26320.9541550 best: 26320.9541550 (2233) total: 5.11s remaining: 17.8s
2234: learn: 15361.3344784 test: 26321.1160197 best: 26320.9541550 (2233) total: 5.11s remaining: 17.7s
2235: learn: 15360.9394282 test: 26320.5749692 best: 26320.5749692 (2235) total: 5.11s remaining: 17.7s
2236: learn: 15360.6853304 test: 26321.2214260 best: 26320.5749692 (2235) total: 5.11s remaining: 17.7s
2237: learn: 15354.6085643 test: 26320.4506773 best: 26320.4506773 (2237) total: 5.12s remaining: 17.7s
2238: learn: 15354.2749870 test: 26320.4651693 best: 26320.4506773 (2237) total: 5.12s remaining: 17.7s
2239: learn: 15352.1316435 test: 26319.1361270 best: 26319.1361270 (2239) total: 5.12s remaining: 17.7s
2240: learn: 15350.4124869 test: 26315.3333875 best: 26315.3333875 (2240) total: 5.12s remaining: 17.7s
2241: learn: 15349.5467517 test: 26313.8491380 best: 26313.8491380 (2241) total: 5.12s remaining: 17.7s
2242: learn: 15339.4526403 test: 26309.0684079 best: 26309.0684079 (2242) total: 5.13s remaining: 17.7s
2243: learn: 15338.8184686 test: 26308.2666694 best: 26308.2666694 (2243) total: 5.13s remaining: 17.7s
2244: learn: 15333.0717102 test: 26307.3260930 best: 26307.3260930 (2244) total: 5.13s remaining: 17.7s
2245: learn: 15332.8326574 test: 26307.1911847 best: 26307.1911847 (2245) total: 5.13s remaining: 17.7s
2246: learn: 15327.5830866 test: 26307.9820691 best: 26307.1911847 (2245) total: 5.14s remaining: 17.7s
2247: learn: 15323.6617205 test: 26308.2616430 best: 26307.1911847 (2245) total: 5.14s remaining: 17.7s
2248: learn: 15316.8350518 test: 26307.4049933 best: 26307.1911847 (2245) total: 5.14s remaining: 17.7s
2249: learn: 15315.7813899 test: 26305.9144141 best: 26305.9144141 (2249) total: 5.14s remaining: 17.7s
2250: learn: 15315.3905172 test: 26304.1773687 best: 26304.1773687 (2250) total: 5.14s remaining: 17.7s
2251: learn: 15304.5489287 test: 26303.8086562 best: 26303.8086562 (2251) total: 5.15s remaining: 17.7s
2252: learn: 15298.0493477 test: 26300.7809960 best: 26300.7809960 (2252) total: 5.15s remaining: 17.7s
2253: learn: 15294.5513507 test: 26301.0229642 best: 26300.7809960 (2252) total: 5.15s remaining: 17.7s
2254: learn: 15294.2382448 test: 26300.8114133 best: 26300.7809960 (2252) total: 5.15s remaining: 17.7s
2255: learn: 15287.0747017 test: 26303.4439238 best: 26300.7809960 (2252) total: 5.16s remaining: 17.7s
2256: learn: 15284.0369034 test: 26299.9768961 best: 26299.9768961 (2256) total: 5.16s remaining: 17.7s
2257: learn: 15275.2081215 test: 26296.5967721 best: 26296.5967721 (2257) total: 5.16s remaining: 17.7s
2258: learn: 15274.1521890 test: 26293.3022576 best: 26293.3022576 (2258) total: 5.16s remaining: 17.7s
2259: learn: 15269.4633436 test: 26291.0618229 best: 26291.0618229 (2259) total: 5.17s remaining: 17.7s
2260: learn: 15263.1860799 test: 26291.1773625 best: 26291.0618229 (2259) total: 5.17s remaining: 17.7s
2261: learn: 15262.8889701 test: 26291.8474029 best: 26291.0618229 (2259) total: 5.17s remaining: 17.7s
2262: learn: 15261.4999997 test: 26289.5046811 best: 26289.5046811 (2262) total: 5.17s remaining: 17.7s
2263: learn: 15255.3205890 test: 26290.5336610 best: 26289.5046811 (2262) total: 5.17s remaining: 17.7s
2264: learn: 15253.4714498 test: 26288.7596554 best: 26288.7596554 (2264) total: 5.18s remaining: 17.7s
2265: learn: 15250.8964530 test: 26289.1404862 best: 26288.7596554 (2264) total: 5.18s remaining: 17.7s
2266: learn: 15250.7799088 test: 26289.2504275 best: 26288.7596554 (2264) total: 5.18s remaining: 17.7s
2267: learn: 15246.6294509 test: 26290.1140912 best: 26288.7596554 (2264) total: 5.18s remaining: 17.7s
2268: learn: 15245.7404424 test: 26290.5299098 best: 26288.7596554 (2264) total: 5.18s remaining: 17.7s
2269: learn: 15243.8501137 test: 26288.9085226 best: 26288.7596554 (2264) total: 5.19s remaining: 17.7s
2270: learn: 15236.9900597 test: 26287.2226995 best: 26287.2226995 (2270) total: 5.19s remaining: 17.7s
2271: learn: 15231.8064191 test: 26286.6234321 best: 26286.6234321 (2271) total: 5.19s remaining: 17.7s
2272: learn: 15227.2859317 test: 26286.5204662 best: 26286.5204662 (2272) total: 5.19s remaining: 17.7s
2273: learn: 15226.2411564 test: 26285.8140444 best: 26285.8140444 (2273) total: 5.2s remaining: 17.7s
2274: learn: 15225.0171883 test: 26285.4832698 best: 26285.4832698 (2274) total: 5.2s remaining: 17.6s
2275: learn: 15215.1039688 test: 26279.8182666 best: 26279.8182666 (2275) total: 5.2s remaining: 17.6s
2276: learn: 15205.0221539 test: 26278.4609336 best: 26278.4609336 (2276) total: 5.2s remaining: 17.6s
2277: learn: 15195.9110512 test: 26272.3469851 best: 26272.3469851 (2277) total: 5.2s remaining: 17.6s
2278: learn: 15195.6649594 test: 26272.9991222 best: 26272.3469851 (2277) total: 5.21s remaining: 17.6s
2279: learn: 15191.8299856 test: 26272.7779387 best: 26272.3469851 (2277) total: 5.21s remaining: 17.6s
2280: learn: 15191.0124169 test: 26270.9196307 best: 26270.9196307 (2280) total: 5.21s remaining: 17.6s
2281: learn: 15189.9824031 test: 26269.4816082 best: 26269.4816082 (2281) total: 5.21s remaining: 17.6s
2282: learn: 15184.7895031 test: 26270.4281658 best: 26269.4816082 (2281) total: 5.21s remaining: 17.6s
2283: learn: 15184.1686444 test: 26269.2788686 best: 26269.2788686 (2283) total: 5.22s remaining: 17.6s
2284: learn: 15177.5299670 test: 26271.2764283 best: 26269.2788686 (2283) total: 5.22s remaining: 17.6s
2285: learn: 15172.8968476 test: 26268.5583176 best: 26268.5583176 (2285) total: 5.22s remaining: 17.6s
2286: learn: 15164.3319659 test: 26269.4829859 best: 26268.5583176 (2285) total: 5.22s remaining: 17.6s
2287: learn: 15157.8430403 test: 26271.1128256 best: 26268.5583176 (2285) total: 5.23s remaining: 17.6s
2288: learn: 15154.4090478 test: 26271.5286409 best: 26268.5583176 (2285) total: 5.23s remaining: 17.6s
2289: learn: 15143.0883433 test: 26268.4463602 best: 26268.4463602 (2289) total: 5.23s remaining: 17.6s
2290: learn: 15143.0066345 test: 26268.6207498 best: 26268.4463602 (2289) total: 5.23s remaining: 17.6s
2291: learn: 15142.0412495 test: 26268.2979711 best: 26268.2979711 (2291) total: 5.23s remaining: 17.6s
2292: learn: 15135.1711552 test: 26268.0117410 best: 26268.0117410 (2292) total: 5.24s remaining: 17.6s
2293: learn: 15134.6032014 test: 26267.7698808 best: 26267.7698808 (2293) total: 5.24s remaining: 17.6s
2294: learn: 15129.0768990 test: 26266.9195543 best: 26266.9195543 (2294) total: 5.24s remaining: 17.6s
2295: learn: 15121.2754087 test: 26266.6219284 best: 26266.6219284 (2295) total: 5.24s remaining: 17.6s
2296: learn: 15120.8975563 test: 26266.4621311 best: 26266.4621311 (2296) total: 5.25s remaining: 17.6s
2297: learn: 15115.2273806 test: 26264.5481915 best: 26264.5481915 (2297) total: 5.25s remaining: 17.6s
2298: learn: 15114.8544990 test: 26262.8407424 best: 26262.8407424 (2298) total: 5.25s remaining: 17.6s
2299: learn: 15107.2084955 test: 26262.5834177 best: 26262.5834177 (2299) total: 5.25s remaining: 17.6s
2300: learn: 15106.2410474 test: 26262.4160221 best: 26262.4160221 (2300) total: 5.25s remaining: 17.6s
2301: learn: 15100.6848664 test: 26259.5152903 best: 26259.5152903 (2301) total: 5.26s remaining: 17.6s
2302: learn: 15094.4638363 test: 26257.1562777 best: 26257.1562777 (2302) total: 5.26s remaining: 17.6s
2303: learn: 15088.3862656 test: 26254.5705389 best: 26254.5705389 (2303) total: 5.26s remaining: 17.6s
2304: learn: 15083.8447543 test: 26253.1034175 best: 26253.1034175 (2304) total: 5.26s remaining: 17.6s
2305: learn: 15076.2649321 test: 26251.9416120 best: 26251.9416120 (2305) total: 5.27s remaining: 17.6s
2306: learn: 15071.2176112 test: 26253.0402936 best: 26251.9416120 (2305) total: 5.27s remaining: 17.6s
2307: learn: 15070.6552268 test: 26253.3413485 best: 26251.9416120 (2305) total: 5.27s remaining: 17.6s
2308: learn: 15062.6404973 test: 26251.1330820 best: 26251.1330820 (2308) total: 5.27s remaining: 17.6s
2309: learn: 15061.5668335 test: 26250.2611006 best: 26250.2611006 (2309) total: 5.28s remaining: 17.6s
2310: learn: 15059.6743433 test: 26247.7698851 best: 26247.7698851 (2310) total: 5.28s remaining: 17.6s
2311: learn: 15051.9053645 test: 26248.6479638 best: 26247.7698851 (2310) total: 5.28s remaining: 17.6s
2312: learn: 15044.1994260 test: 26250.3096864 best: 26247.7698851 (2310) total: 5.28s remaining: 17.6s
2313: learn: 15041.4768403 test: 26250.0446366 best: 26247.7698851 (2310) total: 5.28s remaining: 17.6s
2314: learn: 15033.5651725 test: 26245.9592849 best: 26245.9592849 (2314) total: 5.29s remaining: 17.5s
2315: learn: 15026.6013031 test: 26241.2229084 best: 26241.2229084 (2315) total: 5.29s remaining: 17.5s
2316: learn: 15016.2646733 test: 26236.3840994 best: 26236.3840994 (2316) total: 5.29s remaining: 17.5s
2317: learn: 15011.5336725 test: 26236.5037660 best: 26236.3840994 (2316) total: 5.29s remaining: 17.5s
2318: learn: 15003.9349658 test: 26234.2963386 best: 26234.2963386 (2318) total: 5.29s remaining: 17.5s
2319: learn: 15003.6653354 test: 26234.2051658 best: 26234.2051658 (2319) total: 5.3s remaining: 17.5s
2320: learn: 14995.8981251 test: 26233.3339987 best: 26233.3339987 (2320) total: 5.3s remaining: 17.5s
2321: learn: 14995.5400538 test: 26232.8160695 best: 26232.8160695 (2321) total: 5.3s remaining: 17.5s
2322: learn: 14991.8514061 test: 26232.8208310 best: 26232.8160695 (2321) total: 5.3s remaining: 17.5s
2323: learn: 14985.2375270 test: 26228.2806671 best: 26228.2806671 (2323) total: 5.3s remaining: 17.5s
2324: learn: 14984.3017506 test: 26229.1648454 best: 26228.2806671 (2323) total: 5.31s remaining: 17.5s
2325: learn: 14975.9847267 test: 26228.9211519 best: 26228.2806671 (2323) total: 5.31s remaining: 17.5s
2326: learn: 14969.8807504 test: 26227.7757837 best: 26227.7757837 (2326) total: 5.31s remaining: 17.5s
2327: learn: 14969.5938099 test: 26226.4122046 best: 26226.4122046 (2327) total: 5.31s remaining: 17.5s
2328: learn: 14964.6980442 test: 26225.2726189 best: 26225.2726189 (2328) total: 5.32s remaining: 17.5s
2329: learn: 14956.5261012 test: 26220.2127981 best: 26220.2127981 (2329) total: 5.32s remaining: 17.5s
2330: learn: 14950.3900333 test: 26216.7821994 best: 26216.7821994 (2330) total: 5.32s remaining: 17.5s
2331: learn: 14943.9796180 test: 26216.6116659 best: 26216.6116659 (2331) total: 5.32s remaining: 17.5s
2332: learn: 14939.9896088 test: 26215.7372894 best: 26215.7372894 (2332) total: 5.33s remaining: 17.5s
2333: learn: 14939.4019831 test: 26214.5382282 best: 26214.5382282 (2333) total: 5.33s remaining: 17.5s
2334: learn: 14938.8255562 test: 26213.8230056 best: 26213.8230056 (2334) total: 5.33s remaining: 17.5s
2335: learn: 14937.0241754 test: 26211.6306085 best: 26211.6306085 (2335) total: 5.33s remaining: 17.5s
2336: learn: 14931.4794366 test: 26213.2647798 best: 26211.6306085 (2335) total: 5.33s remaining: 17.5s
2337: learn: 14924.7455191 test: 26215.9217149 best: 26211.6306085 (2335) total: 5.34s remaining: 17.5s
2338: learn: 14924.5243041 test: 26216.1208127 best: 26211.6306085 (2335) total: 5.34s remaining: 17.5s
2339: learn: 14918.0307077 test: 26213.7512968 best: 26211.6306085 (2335) total: 5.34s remaining: 17.5s
2340: learn: 14910.5205197 test: 26212.9851527 best: 26211.6306085 (2335) total: 5.34s remaining: 17.5s
2341: learn: 14903.9232610 test: 26209.7038316 best: 26209.7038316 (2341) total: 5.34s remaining: 17.5s
2342: learn: 14902.7692996 test: 26209.3679034 best: 26209.3679034 (2342) total: 5.35s remaining: 17.5s
2343: learn: 14900.9249254 test: 26206.4041049 best: 26206.4041049 (2343) total: 5.35s remaining: 17.5s
2344: learn: 14900.6684138 test: 26207.0091367 best: 26206.4041049 (2343) total: 5.35s remaining: 17.5s
2345: learn: 14898.1565516 test: 26206.2561393 best: 26206.2561393 (2345) total: 5.35s remaining: 17.5s
2346: learn: 14897.8922157 test: 26206.8172816 best: 26206.2561393 (2345) total: 5.36s remaining: 17.5s
2347: learn: 14889.6270748 test: 26201.2845191 best: 26201.2845191 (2347) total: 5.36s remaining: 17.5s
2348: learn: 14879.9148285 test: 26196.5043740 best: 26196.5043740 (2348) total: 5.36s remaining: 17.5s
2349: learn: 14878.5648666 test: 26195.7525641 best: 26195.7525641 (2349) total: 5.36s remaining: 17.5s
2350: learn: 14870.3266576 test: 26195.1752973 best: 26195.1752973 (2350) total: 5.37s remaining: 17.5s
2351: learn: 14870.1145543 test: 26195.1224379 best: 26195.1224379 (2351) total: 5.37s remaining: 17.5s
2352: learn: 14864.7934934 test: 26195.7336935 best: 26195.1224379 (2351) total: 5.37s remaining: 17.5s
2353: learn: 14858.3900997 test: 26192.9305431 best: 26192.9305431 (2353) total: 5.37s remaining: 17.5s
2354: learn: 14857.9196213 test: 26191.5302786 best: 26191.5302786 (2354) total: 5.37s remaining: 17.4s
2355: learn: 14850.5243342 test: 26193.7734747 best: 26191.5302786 (2354) total: 5.38s remaining: 17.4s
2356: learn: 14843.5990971 test: 26192.5578233 best: 26191.5302786 (2354) total: 5.38s remaining: 17.4s
2357: learn: 14843.3384304 test: 26192.3346847 best: 26191.5302786 (2354) total: 5.38s remaining: 17.4s
2358: learn: 14843.1280594 test: 26192.9047728 best: 26191.5302786 (2354) total: 5.38s remaining: 17.4s
2359: learn: 14842.7235614 test: 26192.7418567 best: 26191.5302786 (2354) total: 5.38s remaining: 17.4s
2360: learn: 14838.9225527 test: 26191.6519134 best: 26191.5302786 (2354) total: 5.39s remaining: 17.4s
2361: learn: 14832.4730538 test: 26190.0373981 best: 26190.0373981 (2361) total: 5.39s remaining: 17.4s
2362: learn: 14829.5324709 test: 26189.9801851 best: 26189.9801851 (2362) total: 5.39s remaining: 17.4s
2363: learn: 14822.8405595 test: 26185.2686733 best: 26185.2686733 (2363) total: 5.39s remaining: 17.4s
2364: learn: 14822.4250848 test: 26184.7700000 best: 26184.7700000 (2364) total: 5.4s remaining: 17.4s
2365: learn: 14812.7627350 test: 26180.2779380 best: 26180.2779380 (2365) total: 5.4s remaining: 17.4s
2366: learn: 14812.4966613 test: 26180.5109725 best: 26180.2779380 (2365) total: 5.4s remaining: 17.4s
2367: learn: 14811.8899243 test: 26179.3292197 best: 26179.3292197 (2367) total: 5.4s remaining: 17.4s
2368: learn: 14806.7408883 test: 26177.7474724 best: 26177.7474724 (2368) total: 5.41s remaining: 17.4s
2369: learn: 14801.9729995 test: 26176.3609275 best: 26176.3609275 (2369) total: 5.41s remaining: 17.4s
2370: learn: 14801.8058889 test: 26175.9313610 best: 26175.9313610 (2370) total: 5.41s remaining: 17.4s
2371: learn: 14797.6401374 test: 26176.1822089 best: 26175.9313610 (2370) total: 5.41s remaining: 17.4s
2372: learn: 14794.7917314 test: 26175.7732630 best: 26175.7732630 (2372) total: 5.41s remaining: 17.4s
2373: learn: 14789.9018357 test: 26176.8690995 best: 26175.7732630 (2372) total: 5.42s remaining: 17.4s
2374: learn: 14789.6973513 test: 26177.1605745 best: 26175.7732630 (2372) total: 5.42s remaining: 17.4s
2375: learn: 14783.8045033 test: 26179.8948911 best: 26175.7732630 (2372) total: 5.42s remaining: 17.4s
2376: learn: 14780.0016981 test: 26180.5528021 best: 26175.7732630 (2372) total: 5.42s remaining: 17.4s
2377: learn: 14775.9662151 test: 26182.2765729 best: 26175.7732630 (2372) total: 5.42s remaining: 17.4s
2378: learn: 14774.9849445 test: 26181.8830103 best: 26175.7732630 (2372) total: 5.43s remaining: 17.4s
2379: learn: 14774.7441645 test: 26181.8560688 best: 26175.7732630 (2372) total: 5.43s remaining: 17.4s
2380: learn: 14769.9213721 test: 26179.3438811 best: 26175.7732630 (2372) total: 5.43s remaining: 17.4s
2381: learn: 14766.8800666 test: 26178.2128610 best: 26175.7732630 (2372) total: 5.43s remaining: 17.4s
2382: learn: 14765.4335253 test: 26176.5587210 best: 26175.7732630 (2372) total: 5.44s remaining: 17.4s
Stopped by overfitting detector (10 iterations wait)
bestTest = 26175.77326
bestIteration = 2372
Shrink model to first 2373 iterations.
###Markdown
**LGBMRegressor**
###Code
model = LGBMRegressor(random_state=42, objective='regression', learning_rate=0.1,
                      n_estimators=443, num_leaves=32, min_child_samples=5,
                      verbose=5, reg_alpha=0.01, reg_lambda=0.001)
model.fit(x_train, y_train)
c = model.predict(test)
# a and b are assumed to hold the test-set predictions of the two models fitted earlier
# in this notebook; the submission is the simple average of all three predictions
sub = (a + b + c) / 3
my_submission = pd.DataFrame({'Id': test_Id, 'SalePrice': sub})
my_submission.to_csv('mean_submission.csv', index=False)
###Output
_____no_output_____ |
01-intro-101/python/labs/python_intro101.2.ipynb | ###Markdown
Iteration and logical operations
###Code
# all logical operations return a result
a = 5
b = 1
# Evaluates to True
print(a > b)
# Evaluates to False
print(a < b)
# Other comparison operators: '<=', '>=', and the logical 'not'
print(a <= b)
print(a >= b)
a = False
print(not a)
# Example with an if statement
a = 5
b = 5
if a > b:
    print('a is greater than b')
elif a < b:
    print('a is less than b')
else:
    print('a is equal to b')
###Output
a is equal to b
###Markdown
Timing measurements
###Code
%time
###Output
CPU times: user 3 µs, sys: 0 ns, total: 3 µs
Wall time: 7.39 µs
###Markdown
Iterating with for
###Code
# For
monsters = ['Kraken', 'Leviathan', 'Uroborus', 'Hydra']
print(monsters)
# Iterating over the monsters list
%time
for monster in monsters:
print(monster)
# the `enumerate` function returns (index, item) tuples
%time
for i, monster in enumerate(monsters):
print(i, monster)
###Output
CPU times: user 3 µs, sys: 1 µs, total: 4 µs
Wall time: 5.25 µs
0 Kraken
1 Leviathan
2 Uroborus
3 Hydra
###Markdown
Iterating with while
###Code
# Another way to iterate over the list, using while
%time
i = 0
while i < len(monsters):
print(i, monsters[i])
i += 1
# At this point we can compute the Fibonacci series up to a given value:
%time
n = 15000
a, b = 0, 1
while a < n:
print(a, end = " ")
a, b = b, a+b
# We can use the `range` function to generate a list
list(range(10))
# We can also iterate over a range in a for loop
for i in range(10):
print(i, end = " ")
for i in range(5, 10):
print(i, end = " ")
for i in range(5, 100, 3):
print(i, end = " ")
###Output
5 8 11 14 17 20 23 26 29 32 35 38 41 44 47 50 53 56 59 62 65 68 71 74 77 80 83 86 89 92 95 98
###Markdown
Iterating over dictionaries
###Code
country_codes = {
34 : 'Spain',
376 : 'Andorra',
39 : 'Italy',
33 : 'France',
424 : None
}
country_codes  # display the dictionary
# We can iterate over the keys
for country_code in country_codes.keys():
print(country_code)
# We can iterate over the values
for country_code in country_codes.values():
print(country_code)
# We can iterate over both at once:
for country_code, country in country_codes.items():
print(country_code, country)
###Output
34 Spain
376 Andorra
39 Italy
33 France
424 None
###Markdown
Functions```suma(x,y) = x + y```
###Code
# The suma function is defined with the special keyword 'def' and takes two arguments: 'x' and 'y':
def suma(x, y):
return x + y
suma(5,7)
suma(10, - 5)
suma(None, 10)  # raises a TypeError: None cannot be added to an int
suma(1.0, 5)
suma(1.5, 9)
# We can define a function whose body does nothing by using `pass`
def dummy():
pass
dummy()
# We build the Fibonacci routine again, this time as a proper function
def fibo(n):
a, b = 0, 1
while a < n:
print(a, end = " ")
a, b = b, a+b
fibo(100)
###Output
0 1 1 2 3 5 8 13 21 34 55 89
###Markdown
Reading files
###Code
# To import libraries we use the `import` statement
import os
# This function returns the path where this notebook is open
os.getcwd()
# We create a file, open it for writing, and keep the file handle
out = open('hola.txt', 'w')
# We write 11 lines, each with a number from 0 to 10: %d formats an integer, %s a string
for i in range(11):
    out.write("Line %d%s" % (i, os.linesep))
out.close()
type(out)
# Now we do the same, but reading the file back
file_hola = open("hola.txt")
for line in file_hola:
print(line, end = "")
# Second method
file_hola = open("hola.txt")
lines = file_hola.readlines()
file_hola
print(lines)
type(lines)
for line in lines:
print(line)
# A third way to read the file
with open('hola.txt') as file_hola:
for line in file_hola:
print(line)
###Output
Line 0
Line 1
Line 2
Line 3
Line 4
Line 5
Line 6
Line 7
Line 8
Line 9
Line 10
###Markdown
Exercise 3Complete the following functions and document the code if you consider it appropriate. Finally, write at least one usage example for each function
###Code
# with the from ... import form we only import what we are going to use
from math import cos, sin, radians
# here we would be importing the whole library
# import math
###Output
_____no_output_____
###Markdown
"""Función que calcula la altura en un movimiento de caída libreSuponemos que dejamos caer un objeto des de un edificio de altura desconocida.El parámetro duracion_caida nos indica el tiempo (en segundos) que tarda el objeto en llegar a la tierra. La función debería calcular la altura del edificio des del cual se ha lanzado el objeto.Podéis encontrar más información sobre el movimiento de caída libre en el siguiente enlace: https://www.fisicalab.com/apartado/caida-librecontenidos."""
###Code
def calcular_altura_caida_libre(duracion_caida):
    # Define the variables that appear in the equation
velocidad_inicial = 0
acceleracion = 9.81
    # From the formula, the height is computed as follows
h = velocidad_inicial * duracion_caida + 1/2. * acceleracion * duracion_caida**2
return h
###Output
_____no_output_____
###Markdown
"""Función que calcula las coordenadas cartesianas de un punto representado en coordenadas polaresDado un punto representado por sus coordenadas polares (radio y angulo), la función debería calcular las correspondientes coordenadas cartesianas y devolver una tupla con su valor.Podéis encontrar más información sobre el sistema de coordenadas polares y su conversión al sistema cartesiano en el siguiente enlace: https://es.wikipedia.org/wiki/Coordenadas_polares."""
###Code
def calcular_coordenadas_cartesianas(radio, angulo_en_grados):
    # Convert the angle to radians
angulo_radianes = radians(angulo_en_grados)
    # Compute the corresponding Cartesian coordinates
x = radio * cos(angulo_radianes)
y = radio * sin(angulo_radianes)
return x, y
# Write here at least one usage example using the functions above. For example:
# print "The object was dropped from a height of %f meters" % calcular_altura_caida_libre(10)
print("The object was dropped from a height of %f meters" % calcular_altura_caida_libre(10))
print("The object was dropped from a height of %f meters" % calcular_altura_caida_libre(1.5))
print("The Cartesian coordinates of the point (13, 23) are (%f, %f)." % calcular_coordenadas_cartesianas(13, 23))
print("The Cartesian coordinates of the point (5, 90) are (%f, %f)." % calcular_coordenadas_cartesianas(5, 90))
###Output
The Cartesian coordinates of the point (13, 23) are (11.966563, 5.079505).
The Cartesian coordinates of the point (5, 90) are (0.000000, 5.000000).
|
PyEmotions.ipynb | ###Markdown
###Code
# Torch:
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
from torch.optim import Optimizer
import torch.nn.functional as F
# Other Libraries:
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import math
from enum import Enum
import pickle
import urllib.request
import zipfile
from google.colab import drive
drive.mount('/content/drive')
root = '/content/drive/My Drive/Colab Notebooks/Research/'
def save_obj(obj, name ):
with open(root + 'obj/' + name + '.pkl', 'wb+') as f:
pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL)
def load_obj(name ):
with open(root + 'obj/' + name + '.pkl', 'rb') as f:
return pickle.load(f)
###Output
_____no_output_____
###Markdown
Load RAVDESS Video Data
###Code
RAVDESS_DRIVE_DIR = '/content/drive/My Drive/Colab Notebooks/Research/RAVDESS/'
RAVDESS_BASE_URL = 'https://zenodo.org/record/1188976/files/'
RAVEDESS_VIDEO_SONG_FILE_NAME_TEMPLATE = 'Video_Song_Actor_'
RAVEDESS_VIDEO_SPEECH_FILE_NAME_TEMPLATE = 'Video_Speech_Actor_'
FILE_SUFFIX = '.zip'
RAVEDESS_AUDIO_FILE_NAME_SONG = 'Audio_Song_Actors_01-24.zip'
RAVEDESS_AUDIO_FILE_NAME_SPEECH = 'Audio_Speech_Actors_01-24.zip'
for i in range(1,25):
url_string = RAVDESS_BASE_URL+RAVEDESS_VIDEO_SONG_FILE_NAME_TEMPLATE+"{:02d}".format(i)+FILE_SUFFIX
print(url_string)
drive_file_string = RAVDESS_DRIVE_DIR+RAVEDESS_VIDEO_SONG_FILE_NAME_TEMPLATE+"{:02d}".format(i)+FILE_SUFFIX
try:
urllib.request.urlretrieve(url_string, drive_file_string)
print('Extracting: '+drive_file_string)
with zipfile.ZipFile(drive_file_string, 'r') as zip_ref:
zip_ref.extractall(drive_file_string.split('.')[0])
except:
print("An exception occurred")
for i in range(1,25):
  drive_file_string = RAVDESS_DRIVE_DIR+'VIDEO'+"{:02d}".format(i)+FILE_SUFFIX
  # assumes archives named VIDEO01.zip ... VIDEO24.zip already sit in the Drive folder
  !unzip {drive_file_string}
###Output
_____no_output_____ |
udacity_ml/validation/validation.ipynb | ###Markdown
SkLearn cross validation https://scikit-learn.org/stable/modules/cross_validation.html**Training, Transforming, Predicting**Fit PCA on the training data only to find the principal components; the same fitted transform is then applied to the test data before predicting.**K-Fold Cross Validation**: Provides train/test indices to split data into train/test sets. The dataset is split into k consecutive folds (without shuffling by default). Each fold is then used once as the validation set while the k - 1 remaining folds form the training set.**Practical Advice for K Fold**
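As a quick illustration of the k-fold idea (a minimal sketch, not part of the original notes; the iris data and a decision tree are stand-ins for whatever dataset and estimator you use below):
```
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
import numpy as np

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=42)

scores = []
for train_idx, test_idx in kf.split(X):
    # each fold serves as the validation set exactly once
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print("mean accuracy over %d folds: %f" % (kf.get_n_splits(), np.mean(scores)))
```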
###Code
import pickle
import sys
sys.path.append("../tools/")
from feature_format import featureFormat, targetFeatureSplit
from sklearn.model_selection import train_test_split
data_dict = pickle.load(open("../final_project/final_project_dataset.pkl", "rb") )
### first element is our labels, any added elements are predictor
### features. Keep this the same for the mini-project, but you'll
### have a different feature list when you do the final project.
features_list = ["poi", "salary"]
data = featureFormat(data_dict, features_list)
labels, features = targetFeatureSplit(data)
X_train, X_test, y_train, y_test = \
train_test_split(features, labels, test_size=0.30, random_state=42)
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=0)
#On full data
clf=clf.fit(features, labels)
score = clf.score(features, labels)
print("overfit %f"%score)
clf1 = DecisionTreeClassifier(random_state=0)
clf1=clf1.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
score = clf1.score(X_test, y_test)
print("optimized %f"%score)
from sklearn.model_selection import KFold
kf = KFold(n_splits=2)
kf.get_n_splits(data)
### it's all yours from here forward!
###Output
overfit 0.989474
optimized 0.689655
###Markdown
GridSearchCV is a way of systematically working through multiple combinations of parameter tunes, cross-validating as it goes to determine which tune gives the best performance. The beauty is that it can work through many combinations in only a couple extra lines of code.Here's an example from the sklearn documentation:```parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}svr = svm.SVC()clf = grid_search.GridSearchCV(svr, parameters)clf.fit(iris.data, iris.target)```Let's break this down line by line.```parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}```A dictionary of the parameters, and the possible values they may take. In this case, they're playing around with the kernel (possible choices are 'linear' and 'rbf'), and C (possible choices are 1 and 10).Then a 'grid' of all the following combinations of values for (kernel, C) are automatically generated:```('rbf', 1) ('rbf', 10)('linear', 1) ('linear', 10)```Each is used to train an SVM, and the performance is then assessed using cross-validation.svr = svm.SVC()This looks kind of like creating a classifier, just like we've been doing since the first lesson. But note that the "clf" isn't made until the next line--this is just saying what kind of algorithm to use. Another way to think about this is that the "classifier" isn't just the algorithm in this case, it's algorithm plus parameter values. Note that there's no monkeying around with the kernel or C; all that is handled in the next line.clf = grid_search.GridSearchCV(svr, parameters)This is where the first bit of magic happens; the classifier is being created. We pass the algorithm (svr) and the dictionary of parameters to try (parameters) and it generates a grid of parameter combinations to try.```clf.fit(iris.data, iris.target)```And the second bit of magic. The fit function now tries all the parameter combinations, and returns a fitted classifier that's automatically tuned to the optimal parameter combination. You can now access the parameter values via clf.best_params_.
###Code
from sklearn import svm, datasets
from sklearn.model_selection import GridSearchCV
iris = datasets.load_iris()
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
svc = svm.SVC()
clf = GridSearchCV(svc, parameters)
clf.fit(iris.data, iris.target)
# (this is just the repr of the fitted grid-search object, kept here for reference)
# GridSearchCV(estimator=svm.SVC(),
#              param_grid={'C': [1, 10], 'kernel': ('linear', 'rbf')})
print("Best params to use are : %s"%clf.best_params_)
###Output
Best params to use are : {'C': 1, 'kernel': 'linear'}
|
notebooks/Finding Learning Rate.ipynb | ###Markdown
Finding Learning Rate---Experiments in finding the learning rate using fastai library___ Import Library
###Code
%matplotlib inline
from fastai.vision import *
###Output
_____no_output_____
###Markdown
Load Data
###Code
path = untar_data(URLs.CIFAR)
path.ls()
! cat {path}/labels.txt
data = ImageDataBunch.from_folder(path, valid='test', ds_tfms=get_transforms(), size=32).normalize(imagenet_stats)
###Output
_____no_output_____
###Markdown
EDA
###Code
data.show_batch(3, figsize=(5,5))
data.c, data.classes
data.batch_size, data.stats
data.show_batch(rows=5, figsize=(5,5), ds_type=DatasetType.Valid)
###Output
_____no_output_____
###Markdown
EXPERIMENTS: Trying out different ways to pick learning rates--- EXP1: Training from Scratch
###Code
learner = create_cnn(data=data, arch=models.resnet34, pretrained=False, metrics=[accuracy])
lr_find(learner)
learner.recorder.plot()
###Output
Min numerical gradient: 2.75E-06
###Markdown
Exp1.A: Picking learning rates at the start, middle and end of the steepest curve
###Code
lrs = [2e-2, 2e-3, 2e-5]
###Output
_____no_output_____
###Markdown
End of Steepest Curve
###Code
learner.fit_one_cycle(5, lrs[0])
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_train-scratch_lr-end')
end_scratch_losses = learner.recorder.losses
end_scratch_val_losses = learner.recorder.val_losses
end_scratch_accuracy = learner.recorder.metrics
np.savez(path/"end_scratch_log.npz",
end_scratch_losses = end_scratch_losses,
end_scratch_val_losses = end_scratch_val_losses,
end_scratch_accuracy = end_scratch_accuracy)
###Output
_____no_output_____
###Markdown
Middle of Steepest Curve
###Code
learner = create_cnn(data=data, arch=models.resnet34, pretrained=False, metrics=[accuracy])
learner.fit_one_cycle(5, lrs[1])
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_train-scratch_lr-mid')
mid_scratch_losses = learner.recorder.losses
mid_scratch_val_losses = learner.recorder.val_losses
mid_scratch_accuracy = learner.recorder.metrics
np.savez(path/"mid_scratch_log.npz",
mid_scratch_losses = mid_scratch_losses,
mid_scratch_val_losses = mid_scratch_val_losses,
mid_scratch_accuracy = mid_scratch_accuracy)
###Output
_____no_output_____
###Markdown
Start of Steepest Curve
###Code
learner = create_cnn(data=data, arch=models.resnet34, pretrained=False, metrics=[accuracy])
learner.fit_one_cycle(5, lrs[2])
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_train-scratch_lr-start')
start_scratch_losses = learner.recorder.losses
start_scratch_val_losses = learner.recorder.val_losses
start_scratch_accuracy = learner.recorder.metrics
np.savez(path/"start_scratch_log.npz",
start_scratch_losses = start_scratch_losses,
start_scratch_val_losses = start_scratch_val_losses,
start_scratch_accuracy = start_scratch_accuracy)
scratch_iters = list(range(len(learner.recorder.losses)))
scratch_val_iters = np.cumsum(learner.recorder.nb_batches)
np.savez(path/"scratch_log.npz",
scratch_iters=scratch_iters,
scratch_val_iters=scratch_val_iters)
###Output
_____no_output_____
###Markdown
___ EXP2: Finetuning last layer
###Code
learner = create_cnn(data=data, arch=models.resnet34, metrics=[accuracy])
lr_find(learner)
learner.recorder.plot()
###Output
Min numerical gradient: 9.12E-07
###Markdown
Exp2.A: Picking learning rates at the start, middle and end of the steepest curve
###Code
lrs = [5e-2, 1e-2, 1e-3]
###Output
_____no_output_____
###Markdown
End of Steepest Curve
###Code
learner.fit_one_cycle(5, lrs[0])
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_finetune_lr-end')
end_finetune_losses = learner.recorder.losses
end_finetune_val_losses = learner.recorder.val_losses
end_finetune_accuracy = learner.recorder.metrics
np.savez(path/"end_finetune_log.npz",
end_finetune_losses = end_finetune_losses,
end_finetune_val_losses = end_finetune_val_losses,
end_finetune_accuracy = end_finetune_accuracy)
###Output
_____no_output_____
###Markdown
Middle of Steepest Curve
###Code
learner = create_cnn(data=data, arch=models.resnet34, metrics=[accuracy])
learner.fit_one_cycle(5, lrs[1])
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_finetune_lr-mid')
mid_finetune_losses = learner.recorder.losses
mid_finetune_val_losses = learner.recorder.val_losses
mid_finetune_accuracy = learner.recorder.metrics
np.savez(path/"mid_finetune_log.npz",
mid_finetune_losses = mid_finetune_losses,
mid_finetune_val_losses = mid_finetune_val_losses,
mid_finetune_accuracy = mid_finetune_accuracy)
###Output
_____no_output_____
###Markdown
Start of Steepest Curve
###Code
learner = create_cnn(data=data, arch=models.resnet34, metrics=[accuracy])
learner.fit_one_cycle(5, lrs[2])
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_finetune_lr-start')
start_finetune_losses = learner.recorder.losses
start_finetune_val_losses = learner.recorder.val_losses
start_finetune_accuracy = learner.recorder.metrics
np.savez(path/"end_finetune_log.npz",
start_finetune_losses = start_finetune_losses,
start_finetune_val_losses = start_finetune_val_losses,
start_finetune_accuracy = start_finetune_accuracy)
finetune_iters = list(range(len(learner.recorder.losses)))
finetune_val_iters = np.cumsum(learner.recorder.nb_batches)
np.savez(path/"finetune_log.npz", finetune_iters=finetune_iters, finetune_val_iters=finetune_val_iters)
###Output
_____no_output_____
###Markdown
___ EXP3: Transfer Learning **Here, we will load the finetuned model in which only the last layer was trained, and apply differential learning rates to train the whole network again**
###Code
learner = create_cnn(data=data, arch=models.resnet34, metrics=[accuracy])
learner.load('rn34_finetune_lr-mid')
learner.unfreeze()
lr_find(learner)
learner.recorder.plot()
###Output
Min numerical gradient: 6.31E-07
###Markdown
**Here, we find that the loss vs learning rate curve is very different from the earlier tasks. So, we choose values before the point where the loss started to increase rapidly.** Exp3.A: Picking learning rates at the start, middle and end of the curve before the loss shot upwards
###Code
new_lrs = [1e-4, 1e-5, 1e-6]
# We are told to use the one-tenth of the lr used for finetuning
old_lr = lrs[1]/10
###Output
_____no_output_____
###Markdown
End of Steepest Curve
###Code
learner.fit_one_cycle(5, slice(new_lrs[0], old_lr))
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_transfer_lr-end')
end_transfer_losses = learner.recorder.losses
end_transfer_val_losses = learner.recorder.val_losses
end_transfer_accuracy = learner.recorder.metrics
np.savez(path/"end_transfer_log.npz",
end_transfer_losses = end_transfer_losses,
end_transfer_val_losses = end_transfer_val_losses,
end_transfer_accuracy = end_transfer_accuracy)
###Output
_____no_output_____
###Markdown
Middle of Steepest Curve
###Code
learner = create_cnn(data=data, arch=models.resnet34, metrics=[accuracy])
learner.fit_one_cycle(5, slice(new_lrs[1], old_lr))
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_transfer_lr-mid')
mid_transfer_losses = learner.recorder.losses
mid_transfer_val_losses = learner.recorder.val_losses
mid_transfer_accuracy = learner.recorder.metrics
np.savez(path/"mid_transfer_log.npz",
mid_transfer_losses = mid_transfer_losses,
mid_transfer_val_losses = mid_transfer_val_losses,
mid_transfer_accuracy = mid_transfer_accuracy)
###Output
_____no_output_____
###Markdown
Start of Steepest Curve
###Code
learner = create_cnn(data=data, arch=models.resnet34, metrics=[accuracy])
learner.fit_one_cycle(5, slice(new_lrs[2], old_lr))
learner.recorder.plot_losses()
learner.recorder.plot_metrics()
learner.save('rn34_transfer_lr-start')
start_transfer_losses = learner.recorder.losses
start_transfer_val_losses = learner.recorder.val_losses
start_transfer_accuracy = learner.recorder.metrics
np.savez(path/"start_transfer_log.npz",
start_transfer_losses = start_transfer_losses,
start_transfer_val_losses = start_transfer_val_losses,
start_transfer_accuracy = start_transfer_accuracy)
transfer_iters = list(range(len(learner.recorder.losses)))
transfer_val_iters = np.cumsum(learner.recorder.nb_batches)
np.savez(path/"transfer_log.npz", transfer_iters=transfer_iters, transfer_val_iters=transfer_val_iters)
###Output
_____no_output_____
###Markdown
___ RESULTS: How to find a learning rate?--- For Training from Scratch
###Code
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(scratch_iters, start_scratch_losses, label='Train Start lr', color='r')
ax.plot(scratch_val_iters, start_scratch_val_losses, label='Valid Start lr', color='r',linestyle='--')
ax.plot(scratch_iters, mid_scratch_losses, label='Train Mid lr', color='b')
ax.plot(scratch_val_iters, mid_scratch_val_losses, label='Valid Mid lr', color='b',linestyle='--')
ax.plot(scratch_iters, end_scratch_losses, label='Train End lr', color='g')
ax.plot(scratch_val_iters, end_scratch_val_losses, label='Valid End lr', color='g',linestyle='--')
ax.set_ylabel('Loss')
ax.set_xlabel('Batches processed')
ax.legend()
###Output
_____no_output_____
###Markdown
___ For Finetuning last layer
###Code
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(finetune_iters, start_finetune_losses, label='Train Start lr', color='r')
ax.plot(finetune_val_iters, start_finetune_val_losses, label='Valid Start lr', color='r',linestyle='--')
ax.plot(finetune_iters, mid_finetune_losses, label='Train Mid lr', color='b')
ax.plot(finetune_val_iters, mid_finetune_val_losses, label='Valid Mid lr', color='b',linestyle='--')
ax.plot(finetune_iters, end_finetune_losses, label='Train End lr', color='g')
ax.plot(finetune_val_iters, end_finetune_val_losses, label='Valid End lr', color='g',linestyle='--')
ax.set_ylabel('Loss')
ax.set_xlabel('Batches processed')
ax.legend()
###Output
_____no_output_____
###Markdown
___ For Transfer Learning
###Code
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(transfer_iters, start_transfer_losses, label='Train Start lr', color='r')
ax.plot(transfer_val_iters, start_transfer_val_losses, label='Valid Start lr', color='r',linestyle='--')
ax.plot(transfer_iters, mid_transfer_losses, label='Train Mid lr', color='b')
ax.plot(transfer_val_iters, mid_transfer_val_losses, label='Valid Mid lr', color='b',linestyle='--')
ax.plot(transfer_iters, end_transfer_losses, label='Train End lr', color='g')
ax.plot(transfer_val_iters, end_transfer_val_losses, label='Valid End lr', color='g',linestyle='--')
ax.set_ylabel('Loss')
ax.set_xlabel('Batches processed')
ax.legend()
###Output
_____no_output_____ |
15-Preview-of-Data-Science-Tools.ipynb | ###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* A Preview of Data Science Tools If you would like to spring from here and go farther in using Python for scientific computing or data science, there are a few packages that will make your life much easier.This section will introduce and preview several of the more important ones, and give you an idea of the types of applications they are designed for.If you're using the *Anaconda* or *Miniconda* environment suggested at the beginning of this report, you can install the relevant packages with the following command:```$ conda install numpy scipy pandas matplotlib scikit-learn```Let's take a brief look at each of these in turn. NumPy: Numerical PythonNumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python.The important features of NumPy are:- It provides an ``ndarray`` structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.- It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.In the simplest case, NumPy arrays look a lot like Python lists.For example, here is an array containing the range of numbers 1 to 9 (compare this with Python's built-in ``range()``):
###Code
import numpy as np
x = np.arange(1, 10)
x
###Output
_____no_output_____
###Markdown
NumPy's arrays offer both efficient storage of data, as well as efficient element-wise operations on the data.For example, to square each element of the array, we can apply the "``**``" operator to the array directly:
###Code
x ** 2
###Output
_____no_output_____
###Markdown
Compare this with the much more verbose Python-style list comprehension for the same result:
###Code
[val ** 2 for val in range(1, 10)]
###Output
_____no_output_____
###Markdown
Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional.For example, here we will reshape our ``x`` array into a 3x3 array:
###Code
M = x.reshape((3, 3))
M
###Output
_____no_output_____
###Markdown
A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using ``.T``:
###Code
M.T
###Output
_____no_output_____
###Markdown
or a matrix-vector product using ``np.dot``:
###Code
np.dot(M, [5, 6, 7])
###Output
_____no_output_____
###Markdown
and even more sophisticated operations like eigenvalue decomposition:
###Code
np.linalg.eigvals(M)
###Output
_____no_output_____
###Markdown
Such linear algebraic manipulation underpins much of modern data analysis, particularly when it comes to the fields of machine learning and data mining.For more information on NumPy, see [Resources for Further Learning](16-Further-Resources.ipynb). Pandas: Labeled Column-oriented DataPandas is a much newer package than NumPy, and is in fact built on top of it.What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages.DataFrames in Pandas look something like this:
###Code
import pandas as pd
df = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],
'value': [1, 2, 3, 4, 5, 6]})
df
###Output
_____no_output_____
###Markdown
The Pandas interface allows you to do things like select columns by name:
###Code
df['label']
###Output
_____no_output_____
###Markdown
Apply string operations across string entries:
###Code
df['label'].str.lower()
###Output
_____no_output_____
###Markdown
Apply aggregates across numerical entries:
###Code
df['value'].sum()
###Output
_____no_output_____
###Markdown
And, perhaps most importantly, do efficient database-style joins and groupings:
###Code
df.groupby('label').sum()
###Output
_____no_output_____
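###Markdown
The `groupby` above covers the grouping side; joins work in a similar spirit. As a small aside (not in the original text), here is a database-style join of `df` against a second labeled table using `pd.merge`:
###Code
# a second table keyed by the same labels
df2 = pd.DataFrame({'label': ['A', 'B', 'C'],
                    'group': ['first', 'second', 'third']})
# inner join on the shared 'label' column
pd.merge(df, df2, on='label')
###Output
_____no_output_____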
###Markdown
Here, with the ``groupby`` above, in one line we have computed the sum of all objects sharing the same label, something that is much more verbose (and much less efficient) using tools provided in NumPy and core Python.For more information on using Pandas, see [Resources for Further Learning](16-Further-Resources.ipynb). Matplotlib MatLab-style scientific visualizationMatplotlib is currently the most popular scientific visualization package in Python.Even proponents admit that its interface is sometimes overly verbose, but it is a powerful library for creating a large range of plots.To use Matplotlib, we can start by enabling the notebook mode (for use in the Jupyter notebook) and then importing the package as ``plt``:
###Code
# run this if using Jupyter notebook
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot') # make graphs in the style of R's ggplot
###Output
_____no_output_____
###Markdown
Now let's create some data (as NumPy arrays, of course) and plot the results:
###Code
x = np.linspace(0, 10) # range of values from 0 to 10
y = np.sin(x) # sine of these values
plt.plot(x, y); # plot as a line
###Output
_____no_output_____
###Markdown
If you run this code live, you will see an interactive plot that lets you pan, zoom, and scroll to explore the data.This is the simplest example of a Matplotlib plot; for ideas on the wide range of plot types available, see [Matplotlib's online gallery](http://matplotlib.org/gallery.html) as well as other references listed in [Resources for Further Learning](16-Further-Resources.ipynb). SciPy: Scientific PythonSciPy is a collection of scientific functionality that is built on NumPy.The package began as a set of Python wrappers to well-known Fortran libraries for numerical computing, and has grown from there.The package is arranged as a set of submodules, each implementing some class of numerical algorithms.Here is an incomplete sample of some of the more important ones for data science:- ``scipy.fftpack``: Fast Fourier transforms- ``scipy.integrate``: Numerical integration- ``scipy.interpolate``: Numerical interpolation- ``scipy.linalg``: Linear algebra routines- ``scipy.optimize``: Numerical optimization of functions- ``scipy.sparse``: Sparse matrix storage and linear algebra- ``scipy.stats``: Statistical analysis routinesFor example, let's take a look at interpolating a smooth curve between some data
###Code
from scipy import interpolate
# choose eight points between 0 and 10
x = np.linspace(0, 10, 8)
y = np.sin(x)
# create a cubic interpolation function
func = interpolate.interp1d(x, y, kind='cubic')
# interpolate on a grid of 1,000 points
x_interp = np.linspace(0, 10, 1000)
y_interp = func(x_interp)
# plot the results
plt.figure() # new figure
plt.plot(x, y, 'o')
plt.plot(x_interp, y_interp);
###Output
_____no_output_____
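###Markdown
As one more small illustration of the submodules listed above (an aside, not in the original text), `scipy.integrate` can numerically integrate the same sine curve; the exact value of the integral of sin(x) from 0 to pi is 2:
###Code
from scipy import integrate
# numerically integrate sin(x) over [0, pi]; quad returns (value, error estimate)
value, err = integrate.quad(np.sin, 0, np.pi)
print(value, err)
###Output
_____no_output_____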
###Markdown
*This notebook is an adaptation by J. Rafael Rodríguez Galván of the material "[Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp)" by Jake VanderPlas; both the [original content](https://github.com/jakevdp/WhirlwindTourOfPython) and the [current adaptation](https://github.com/rrgalvan/PythonIntroMasterMatemat) are available on GitHub.**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* A Preview of Data Science Tools If you would like to spring from here and go farther in using Python for scientific computing or data science, there are a few packages that will make your life much easier.This section will introduce and preview several of the more important ones, and give you an idea of the types of applications they are designed for.If you're using the *Anaconda* or *Miniconda* environment suggested at the beginning of this report, you can install the relevant packages with the following command:```$ conda install numpy scipy pandas matplotlib scikit-learn```Let's take a brief look at each of these in turn. NumPy: Numerical PythonNumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python.The important features of NumPy are:- It provides an ``ndarray`` structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.- It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.In the simplest case, NumPy arrays look a lot like Python lists.For example, here is an array containing the range of numbers 1 to 9 (compare this with Python's built-in ``range()``):
###Code
import numpy as np
x = np.arange(1, 10)
x
###Output
_____no_output_____
###Markdown
NumPy's arrays offer both efficient storage of data, as well as efficient element-wise operations on the data.For example, to square each element of the array, we can apply the "``**``" operator to the array directly:
###Code
x ** 2
###Output
_____no_output_____
###Markdown
Compare this with the much more verbose Python-style list comprehension for the same result:
###Code
[val ** 2 for val in range(1, 10)]
###Output
_____no_output_____
###Markdown
Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional.For example, here we will reshape our ``x`` array into a 3x3 array:
###Code
M = x.reshape((3, 3))
M
###Output
_____no_output_____
###Markdown
A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using ``.T``:
###Code
M.T
###Output
_____no_output_____
###Markdown
or a matrix-vector product using ``np.dot``:
###Code
np.dot(M, [5, 6, 7])
###Output
_____no_output_____
###Markdown
and even more sophisticated operations like eigenvalue decomposition:
###Code
np.linalg.eigvals(M)
###Output
_____no_output_____
###Markdown
Such linear algebraic manipulation underpins much of modern data analysis, particularly when it comes to the fields of machine learning and data mining.For more information on NumPy, see [Resources for Further Learning](16-Further-Resources.ipynb). Pandas: Labeled Column-oriented DataPandas is a much newer package than NumPy, and is in fact built on top of it.What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages.DataFrames in Pandas look something like this:
###Code
import pandas as pd
df = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],
'value': [1, 2, 3, 4, 5, 6]})
df
###Output
_____no_output_____
###Markdown
The Pandas interface allows you to do things like select columns by name:
###Code
df['label']
###Output
_____no_output_____
###Markdown
Apply string operations across string entries:
###Code
df['label'].str.lower()
###Output
_____no_output_____
###Markdown
Apply aggregates across numerical entries:
###Code
df['value'].sum()
###Output
_____no_output_____
###Markdown
And, perhaps most importantly, do efficient database-style joins and groupings:
###Code
df.groupby('label').sum()
###Output
_____no_output_____
###Markdown
Here in one line we have computed the sum of all objects sharing the same label, something that is much more verbose (and much less efficient) using tools provided in Numpy and core Python.For more information on using Pandas, see [Resources for Further Learning](16-Further-Resources.ipynb). Matplotlib MatLab-style scientific visualizationMatplotlib is currently the most popular scientific visualization packages in Python.Even proponents admit that its interface is sometimes overly verbose, but it is a powerful library for creating a large range of plots.To use Matplotlib, we can start by enabling the notebook mode (for use in the Jupyter notebook) and then importing the package as ``plt``"
###Code
# run this if using Jupyter notebook
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot') # make graphs in the style of R's ggplot
###Output
_____no_output_____
###Markdown
Now let's create some data (as NumPy arrays, of course) and plot the results:
###Code
x = np.linspace(0, 10) # range of values from 0 to 10
y = np.sin(x) # sine of these values
plt.plot(x, y); # plot as a line
###Output
_____no_output_____
###Markdown
If you run this code live, you will see an interactive plot that lets you pan, zoom, and scroll to explore the data.This is the simplest example of a Matplotlib plot; for ideas on the wide range of plot types available, see [Matplotlib's online gallery](http://matplotlib.org/gallery.html) as well as other references listed in [Resources for Further Learning](16-Further-Resources.ipynb). SciPy: Scientific PythonSciPy is a collection of scientific functionality that is built on NumPy.The package began as a set of Python wrappers to well-known Fortran libraries for numerical computing, and has grown from there.The package is arranged as a set of submodules, each implementing some class of numerical algorithms.Here is an incomplete sample of some of the more important ones for data science:- ``scipy.fftpack``: Fast Fourier transforms- ``scipy.integrate``: Numerical integration- ``scipy.interpolate``: Numerical interpolation- ``scipy.linalg``: Linear algebra routines- ``scipy.optimize``: Numerical optimization of functions- ``scipy.sparse``: Sparse matrix storage and linear algebra- ``scipy.stats``: Statistical analysis routinesFor example, let's take a look at interpolating a smooth curve between some data
###Code
from scipy import interpolate
# choose eight points between 0 and 10
x = np.linspace(0, 10, 8)
y = np.sin(x)
# create a cubic interpolation function
func = interpolate.interp1d(x, y, kind='cubic')
# interpolate on a grid of 1,000 points
x_interp = np.linspace(0, 10, 1000)
y_interp = func(x_interp)
# plot the results
plt.figure() # new figure
plt.plot(x, y, 'o')
plt.plot(x_interp, y_interp);
###Output
_____no_output_____
###Markdown
A Preview of Data Science Tools 数据科学工具预览 > If you would like to spring from here and go farther in using Python for scientific computing or data science, there are a few packages that will make your life much easier.This section will introduce and preview several of the more important ones, and give you an idea of the types of applications they are designed for.If you're using the *Anaconda* or *Miniconda* environment suggested at the beginning of this report, you can install the relevant packages with the following command:```$ conda install numpy scipy pandas matplotlib scikit-learn```> Let's take a brief look at each of these in turn.如果你希望开始使用Python进入科学计算和数据科学的领域,那么有一些第三方包会让你的工作更加轻松。本章将会介绍和预览其中非常重要的几个,通过讨论,你能对他们的应用场景有所了解。如果你在使用*Anaconda*或*Miniconda*环境,你可以在开始之前先安装相关的第三方包:```shell$ conda install numpy scipy pandas matplotlib scikit-learn```下面我们逐个简单的介绍一下它们。 NumPy: Numerical Python Numpy:Numerical Python> NumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python.The important features of NumPy are:> - It provides an ``ndarray`` structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.> - It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.Numpy为Python提供了一个有效的方式来存储和操作多维非稀疏数组。他的重要特性包括:- 提供`ndarray`结构,能够极为高效的存储和操作向量、矩阵和张量。- 提供可读性高和简洁的语法来操作这些数据,从简单的元素算术运算到负责的线性代数运算。> In the simplest case, NumPy arrays look a lot like Python lists.For example, here is an array containing the range of numbers 1 to 9 (compare this with Python's built-in ``range()``):在简单的情况下,Numpy的数组很像Python的列表。例如,下面例子是一个数组包括数值范围1到9(可以与Python內建的`range()`比较):
###Code
import numpy as np
x = np.arange(1, 10)
x
###Output
_____no_output_____
###Markdown
> NumPy's arrays offer both efficient storage of data, as well as efficient element-wise operations on the data.For example, to square each element of the array, we can apply the "``**``" operator to the array directly:Numpy的数组提供了数据的高效存储和元素操作。例如,对每个元素求平方值,只需要简单的将`**`运算符应用到数组上即可。
###Code
x ** 2
###Output
_____no_output_____
###Markdown
> Compare this with the much more verbose Python-style list comprehension for the same result:如果和Python的列表解析进行比较,结果相同:
###Code
[val ** 2 for val in range(1, 10)]
###Output
_____no_output_____
###Markdown
> Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional.For example, here we will reshape our ``x`` array into a 3x3 array:与Python的列表只能是一维的不同,Numpy的数组可以是多维的。例如,我们可以将`x`数组变形为3x3的数组:
###Code
M = x.reshape((3, 3))
M
###Output
_____no_output_____
###Markdown
> A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using ``.T``:二维数组其实就是一个矩阵,Numpy有很多高效的矩阵运算操作。例如,你可以使用`numpy.T`计算出矩阵的倒置:
###Code
M.T
###Output
_____no_output_____
###Markdown
> or a matrix-vector product using ``np.dot``:或者得到一个矩阵向量的点积,使用`numpy.dot`:
###Code
np.dot(M, [5, 6, 7])
###Output
_____no_output_____
###Markdown
> and even more sophisticated operations like eigenvalue decomposition:还有更加复杂的操作如特征值分解:
###Code
np.linalg.eigvals(M)
###Output
_____no_output_____
###Markdown
> Such linear algebraic manipulation underpins much of modern data analysis, particularly when it comes to the fields of machine learning and data mining.这样的线性代数运算是大多数现代数据分析的基础,特别是当你使用机器学习和数据挖掘的时候。> For more information on NumPy, see [Resources for Further Learning](16-Further-Resources.ipynb).要获得更多有关Numpy的资源,参见[后续学习资源](16-Further-Resources.ipynb)。 Pandas: Labeled Column-oriented Data Pandas:标签化的基于列的数据> Pandas is a much newer package than NumPy, and is in fact built on top of it.What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages.DataFrames in Pandas look something like this:Pandas较Numpy而言是一个比较新的包,实际上Pandas是以Numpy为基础构建的。Pandas提供的是一个多维的标签化的数据接口,抽象出来的DataFrame对象对于其他使用R或者相关语言的用户来说会非常熟悉。Pandas中的DataFrame就像这样:
###Code
import pandas as pd
df = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],
'value': [1, 2, 3, 4, 5, 6]})
df
###Output
_____no_output_____
###Markdown
> The Pandas interface allows you to do things like select columns by name:然后Pandas允许使用列名来选择数据:
###Code
df['label']
###Output
_____no_output_____
###Markdown
> Apply string operations across string entries:对字符串的列使用字符串操作:
###Code
df['label'].str.lower()
###Output
_____no_output_____
###Markdown
> Apply aggregates across numerical entries:对数值型的列使用聚合操作:
###Code
df['value'].sum()
###Output
_____no_output_____
###Markdown
> And, perhaps most importantly, do efficient database-style joins and groupings:还有最重要的是,对数据库风格的连接和分组进行有效的计算:
###Code
df.groupby('label').sum()
###Output
_____no_output_____
###Markdown
> Here in one line we have computed the sum of all objects sharing the same label, something that is much more verbose (and much less efficient) using tools provided in Numpy and core Python.我们只用了一行代码就计算了所有相同标签`label`的总和,这在Numpy和Python当中都没有这么简明并且没有这么高效。> For more information on using Pandas, see [Resources for Further Learning](16-Further-Resources.ipynb).要获得更多有关Pandas的资源,参见[后续学习资源](16-Further-Resources.ipynb)。 Matplotlib MatLab-style scientific visualization Matplotlib:MatLab风格的科学图表> Matplotlib is currently the most popular scientific visualization packages in Python.Even proponents admit that its interface is sometimes overly verbose, but it is a powerful library for creating a large range of plots.Matplotlib是目前Python中最流行的科学图表展示包。甚至它的支持者都认为它的接口有时候太冗长了,但是它依然是一个创建图表展示的强大工具库。> To use Matplotlib, we can start by enabling the notebook mode (for use in the Jupyter notebook) and then importing the package as ``plt``"要在Jupyter notebook里面使用Matplotlib,你可以激活它的notebook模式,然后将这个模块载入:
###Code
# activate Matplotlib's notebook mode when running under Jupyter notebook
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot') # make graphs in the style of R's ggplot
###Output
_____no_output_____
###Markdown
> Now let's create some data (as NumPy arrays, of course) and plot the results:现在我们创建一些数据(当然是使用Numpy数组)然后展示图表结果:
###Code
x = np.linspace(0, 10) # evenly spaced values over the range 0 to 10
y = np.sin(x) # sine of these values
plt.plot(x, y); # plot as a line
###Output
_____no_output_____
###Markdown
> If you run this code live, you will see an interactive plot that lets you pan, zoom, and scroll to explore the data.如果你执行这个代码,你会看到一个交互式的图表展示在你的屏幕上,这个图表允许你平移、缩放、滚动来查看数据。> This is the simplest example of a Matplotlib plot; for ideas on the wide range of plot types available, see [Matplotlib's online gallery](http://matplotlib.org/gallery.html) as well as other references listed in [Resources for Further Learning](16-Further-Resources.ipynb).这是Matplotlib图表的一个最最简单的例子;如果你想知道哪些图表类型可用,参见[Matplotlib在线图库](http://matplotlib.org/gallery.html) 还有其他的很多资源可以参看[后续学习资源](16-Further-Resources.ipynb)。 SciPy: Scientific Python SciPy:Python科学库> SciPy is a collection of scientific functionality that is built on NumPy.The package began as a set of Python wrappers to well-known Fortran libraries for numerical computing, and has grown from there.The package is arranged as a set of submodules, each implementing some class of numerical algorithms.Here is an incomplete sample of some of the more important ones for data science:> - ``scipy.fftpack``: Fast Fourier transforms> - ``scipy.integrate``: Numerical integration> - ``scipy.interpolate``: Numerical interpolation> - ``scipy.linalg``: Linear algebra routines> - ``scipy.optimize``: Numerical optimization of functions> - ``scipy.sparse``: Sparse matrix storage and linear algebra> - ``scipy.stats``: Statistical analysis routinesSciPy是一整套科学的功能库,它同样也构建在Numpy之上。这个包首先封装了很多有名的Fortran数值计算库,并且进行了很多扩展。SciPy有很多的子模块,每个子模块实现一类数值算法。下面列出部分重要的子模块机器描述:- ``scipy.fftpack``: 快速傅里叶变换- ``scipy.integrate``: 数值积分- ``scipy.interpolate``: 数值插补- ``scipy.linalg``: 线性代数- ``scipy.optimize``: 函数的数值优化- ``scipy.sparse``: 稀疏矩阵存储和线性代数- ``scipy.stats``: 统计分析> For example, let's take a look at interpolating a smooth curve between some data例如,我们看看在数据之中进行插补形成光滑的曲线:
###Code
from scipy import interpolate
# choose 8 evenly spaced points between 0 and 10
x = np.linspace(0, 10, 8)
y = np.sin(x) # sine of these values
# create a cubic interpolation function
func = interpolate.interp1d(x, y, kind='cubic')
# interpolate on a grid of 1000 points between 0 and 10
x_interp = np.linspace(0, 10, 1000)
y_interp = func(x_interp)
# plot the results with Matplotlib
plt.figure() # new figure
plt.plot(x, y, 'o') # original points as circles
plt.plot(x_interp, y_interp); # interpolated values
###Output
_____no_output_____
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* A Preview of Data Science Tools If you would like to spring from here and go farther in using Python for scientific computing or data science, there are a few packages that will make your life much easier.This section will introduce and preview several of the more important ones, and give you an idea of the types of applications they are designed for.If you're using the *Anaconda* or *Miniconda* environment suggested at the beginning of this report, you can install the relevant packages with the following command:```$ conda install numpy scipy pandas matplotlib scikit-learn```Let's take a brief look at each of these in turn. NumPy: Numerical PythonNumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python.The important features of NumPy are:- It provides an ``ndarray`` structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.- It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.In the simplest case, NumPy arrays look a lot like Python lists.For example, here is an array containing the range of numbers 1 to 9 (compare this with Python's built-in ``range()``):
###Code
import numpy as np
x = np.arange(1, 10)
x
###Output
_____no_output_____
###Markdown
NumPy's arrays offer both efficient storage of data, as well as efficient element-wise operations on the data.For example, to square each element of the array, we can apply the "``**``" operator to the array directly:
###Code
x ** 2
###Output
_____no_output_____
###Markdown
Compare this with the much more verbose Python-style list comprehension for the same result:
###Code
[val ** 2 for val in range(1, 10)]
###Output
_____no_output_____
###Markdown
Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional.For example, here we will reshape our ``x`` array into a 3x3 array:
###Code
M = x.reshape((3, 3))
M
###Output
_____no_output_____
###Markdown
A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using ``.T``:
###Code
M.T
###Output
_____no_output_____
###Markdown
or a matrix-vector product using ``np.dot``:
###Code
np.dot(M, [5, 6, 7])
###Output
_____no_output_____
###Markdown
and even more sophisticated operations like eigenvalue decomposition:
###Code
np.linalg.eigvals(M)
###Output
_____no_output_____
###Markdown
Such linear algebraic manipulation underpins much of modern data analysis, particularly when it comes to the fields of machine learning and data mining.For more information on NumPy, see [Resources for Further Learning](16-Further-Resources.ipynb). Pandas: Labeled Column-oriented DataPandas is a much newer package than NumPy, and is in fact built on top of it.What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages.DataFrames in Pandas look something like this:
###Code
import pandas as pd
df = pd.DataFrame(
{"label": ["A", "B", "C", "A", "B", "C"], "value": [1, 2, 3, 4, 5, 6]}
)
df
###Output
_____no_output_____
###Markdown
The Pandas interface allows you to do things like select columns by name:
###Code
df["label"]
###Output
_____no_output_____
###Markdown
Apply string operations across string entries:
###Code
df["label"].str.lower()
###Output
_____no_output_____
###Markdown
Apply aggregates across numerical entries:
###Code
df["value"].sum()
###Output
_____no_output_____
###Markdown
And, perhaps most importantly, do efficient database-style joins and groupings:
###Code
df.groupby("label").sum()
###Output
_____no_output_____
###Markdown
Here in one line we have computed the sum of all objects sharing the same label, something that is much more verbose (and much less efficient) using tools provided in Numpy and core Python.For more information on using Pandas, see [Resources for Further Learning](16-Further-Resources.ipynb). Matplotlib MatLab-style scientific visualizationMatplotlib is currently the most popular scientific visualization packages in Python.Even proponents admit that its interface is sometimes overly verbose, but it is a powerful library for creating a large range of plots.To use Matplotlib, we can start by enabling the notebook mode (for use in the Jupyter notebook) and then importing the package as ``plt``"
###Code
# run this if using Jupyter notebook (Note: not needed anymore)
# %matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use("ggplot") # make graphs in the style of R's ggplot
###Output
_____no_output_____
###Markdown
Now let's create some data (as NumPy arrays, of course) and plot the results:
###Code
x = np.linspace(0, 10) # range of values from 0 to 10
y = np.sin(x) # sine of these values
plt.plot(x, y)
# plot as a line
###Output
_____no_output_____
###Markdown
If you run this code live, you will see an interactive plot that lets you pan, zoom, and scroll to explore the data.This is the simplest example of a Matplotlib plot; for ideas on the wide range of plot types available, see [Matplotlib's online gallery](http://matplotlib.org/gallery.html) as well as other references listed in [Resources for Further Learning](16-Further-Resources.ipynb). SciPy: Scientific PythonSciPy is a collection of scientific functionality that is built on NumPy.The package began as a set of Python wrappers to well-known Fortran libraries for numerical computing, and has grown from there.The package is arranged as a set of submodules, each implementing some class of numerical algorithms.Here is an incomplete sample of some of the more important ones for data science:- ``scipy.fftpack``: Fast Fourier transforms- ``scipy.integrate``: Numerical integration- ``scipy.interpolate``: Numerical interpolation- ``scipy.linalg``: Linear algebra routines- ``scipy.optimize``: Numerical optimization of functions- ``scipy.sparse``: Sparse matrix storage and linear algebra- ``scipy.stats``: Statistical analysis routinesFor example, let's take a look at interpolating a smooth curve between some data
###Code
from scipy import interpolate
# choose eight points between 0 and 10
x = np.linspace(0, 10, 8)
y = np.sin(x)
# create a cubic interpolation function
func = interpolate.interp1d(x, y, kind="cubic")
# interpolate on a grid of 1,000 points
x_interp = np.linspace(0, 10, 1000)
y_interp = func(x_interp)
# plot the results
plt.figure() # new figure
plt.plot(x, y, "o")
plt.plot(x_interp, y_interp);
###Output
_____no_output_____
###Markdown
A Preview of Data Science Tools If you would like to spring from here and go farther in using Python for scientific computing or data science, there are a few packages that will make your life much easier.This section will introduce and preview several of the more important ones, and give you an idea of the types of applications they are designed for.If you're using the *Anaconda* or *Miniconda* environment suggested at the beginning of this report, you can install the relevant packages with the following command:```$ conda install numpy scipy pandas matplotlib scikit-learn```Let's take a brief look at each of these in turn. NumPy: Numerical PythonNumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python.The important features of NumPy are:- It provides an ``ndarray`` structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.- It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.In the simplest case, NumPy arrays look a lot like Python lists.For example, here is an array containing the range of numbers 1 to 9 (compare this with Python's built-in ``range()``):
###Code
import numpy as np
x = np.arange(1, 10)
x
###Output
_____no_output_____
###Markdown
NumPy's arrays offer both efficient storage of data, as well as efficient element-wise operations on the data.For example, to square each element of the array, we can apply the "``**``" operator to the array directly:
###Code
x ** 2
###Output
_____no_output_____
###Markdown
Compare this with the much more verbose Python-style list comprehension for the same result:
###Code
[val ** 2 for val in range(1, 10)]
###Output
_____no_output_____
###Markdown
Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional.For example, here we will reshape our ``x`` array into a 3x3 array:
###Code
M = x.reshape((3, 3))
M
###Output
_____no_output_____
###Markdown
A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using ``.T``:
###Code
M.T
###Output
_____no_output_____
###Markdown
or a matrix-vector product using ``np.dot``:
###Code
np.dot(M, [5, 6, 7])
###Output
_____no_output_____
###Markdown
and even more sophisticated operations like eigenvalue decomposition:
###Code
np.linalg.eigvals(M)
###Output
_____no_output_____
###Markdown
Such linear algebraic manipulation underpins much of modern data analysis, particularly when it comes to the fields of machine learning and data mining.For more information on NumPy, see [Resources for Further Learning](16-Further-Resources.ipynb). Pandas: Labeled Column-oriented DataPandas is a much newer package than NumPy, and is in fact built on top of it.What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages.DataFrames in Pandas look something like this:
###Code
import pandas as pd
df = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],
'value': [1, 2, 3, 4, 5, 6]})
df
###Output
_____no_output_____
###Markdown
The Pandas interface allows you to do things like select columns by name:
###Code
df['label']
###Output
_____no_output_____
###Markdown
Apply string operations across string entries:
###Code
df['label'].str.lower()
###Output
_____no_output_____
###Markdown
Apply aggregates across numerical entries:
###Code
df['value'].sum()
###Output
_____no_output_____
###Markdown
And, perhaps most importantly, do efficient database-style joins and groupings:
###Code
df.groupby('label').sum()
###Output
_____no_output_____
###Markdown
Here in one line we have computed the sum of all objects sharing the same label, something that is much more verbose (and much less efficient) using tools provided in Numpy and core Python.For more information on using Pandas, see [Resources for Further Learning](16-Further-Resources.ipynb). Matplotlib MatLab-style scientific visualizationMatplotlib is currently the most popular scientific visualization packages in Python.Even proponents admit that its interface is sometimes overly verbose, but it is a powerful library for creating a large range of plots.To use Matplotlib, we can start by enabling the notebook mode (for use in the Jupyter notebook) and then importing the package as ``plt``"
###Code
# run this if using Jupyter notebook
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot') # make graphs in the style of R's ggplot
###Output
_____no_output_____
###Markdown
Now let's create some data (as NumPy arrays, of course) and plot the results:
###Code
x = np.linspace(0, 10) # range of values from 0 to 10
y = np.sin(x) # sine of these values
plt.plot(x, y); # plot as a line
###Output
_____no_output_____
###Markdown
If you run this code live, you will see an interactive plot that lets you pan, zoom, and scroll to explore the data.This is the simplest example of a Matplotlib plot; for ideas on the wide range of plot types available, see [Matplotlib's online gallery](http://matplotlib.org/gallery.html) as well as other references listed in [Resources for Further Learning](16-Further-Resources.ipynb). SciPy: Scientific PythonSciPy is a collection of scientific functionality that is built on NumPy.The package began as a set of Python wrappers to well-known Fortran libraries for numerical computing, and has grown from there.The package is arranged as a set of submodules, each implementing some class of numerical algorithms.Here is an incomplete sample of some of the more important ones for data science:- ``scipy.fftpack``: Fast Fourier transforms- ``scipy.integrate``: Numerical integration- ``scipy.interpolate``: Numerical interpolation- ``scipy.linalg``: Linear algebra routines- ``scipy.optimize``: Numerical optimization of functions- ``scipy.sparse``: Sparse matrix storage and linear algebra- ``scipy.stats``: Statistical analysis routinesFor example, let's take a look at interpolating a smooth curve between some data
###Code
from scipy import interpolate
# choose eight points between 0 and 10
x = np.linspace(0, 10, 8)
y = np.sin(x)
# create a cubic interpolation function
func = interpolate.interp1d(x, y, kind='cubic')
# interpolate on a grid of 1,000 points
x_interp = np.linspace(0, 10, 1000)
y_interp = func(x_interp)
# plot the results
plt.figure() # new figure
plt.plot(x, y, 'o')
plt.plot(x_interp, y_interp);
###Output
_____no_output_____
###Markdown
A Preview of Data Science ToolsIf you would like to spring from here and go farther in using Python for scientific computing or data science, there are a few packages that will make your life much easier.This section will introduce and preview several of the more important ones, and give you an idea of the types of applications they are designed for.If you're using the *Anaconda* environment suggested at the beginning of this report, you can install the relevant packages with the following command:```$ conda install numpy scipy pandas matplotlib scikit-learn```Let's take a brief look at each of these in turn. ``numpy``: Numerical PythonNumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python.The important features of ``numpy`` are:- ``numpy`` provides an ``ndarray`` structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.- ``numpy`` provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.In the simplest case, numpy arrays look a lot like Python lists.For example, here is an array containing the range of numbers 1 to 9 (compare this with Python's built-in ``range()``):
###Code
import numpy as np
x = np.arange(1, 10)
x
###Output
_____no_output_____
###Markdown
NumPy's arrays offer both efficient storage of data, as well as efficient elementwise operations on the data.For example, to square each element of the array, we can apply the "``**``" operator to the array directly:
###Code
x ** 2
###Output
_____no_output_____
###Markdown
Compare this with the much more verbose Python-style list comprehension for the same result:
###Code
[val ** 2 for val in range(1, 10)]
###Output
_____no_output_____
###Markdown
Unlike Python lists which are limited to one dimension, NumPy arrays can be multi-dimensional.For example, here we will reshape our ``x`` array into a 3x3 array:
###Code
M = x.reshape((3, 3))
M
###Output
_____no_output_____
###Markdown
A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using ``.T``
###Code
M.T
###Output
_____no_output_____
###Markdown
or a matrix-vector product using ``np.dot``
###Code
np.dot(M, [5, 6, 7])
###Output
_____no_output_____
###Markdown
and even more sophisticated operations like eigenvalue decomposition:
###Code
np.linalg.eigvals(M)
###Output
_____no_output_____
###Markdown
Such linear algebraic manipulation underpins much of modern data analysis, particularly when it comes to the fields of machine learning and data mining.For more information on NumPy, see the resources listed at the end of this report. ``pandas``: Labeled Column-oriented DataPandas is a much newer package than ``numpy``, and is in fact built on top of it.What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages.Dataframes in Pandas look something like this:
###Code
import pandas as pd
df = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],
'value': [1, 2, 3, 4, 5, 6]})
df
###Output
_____no_output_____
###Markdown
The pandas interface allows you to do things like select columns by name:
###Code
df['label']
###Output
_____no_output_____
###Markdown
Apply string operations across string entries:
###Code
df['label'].str.lower()
###Output
_____no_output_____
###Markdown
Apply aggregates across numerical entries:
###Code
df['value'].sum()
###Output
_____no_output_____
###Markdown
And, perhaps most importantly, do efficient database-style joins and groupings:
###Code
df.groupby('label').sum()
###Output
_____no_output_____
###Markdown
Here in one line we have computed the sum of all objects sharing the same label, something that is much more verbose (and much less efficient) using tools provided in Numpy and core Python.For more information on using Pandas, see the resources at the end of this report. ``matplotlib``: MatLab-style scientific visualizationMatplotlib is currently the most popular scientific visualization packages in Python.Even proponents admit that its interface is sometimes overly verbose, but it is a powerful library for creating a large range of plots.To use matplotlib, we can start by enabling the notebook mode (for use in the Jupyter notebook) and then importing the package as ``plt``"
###Code
# run this if using Jupyter notebook
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot') # make graphs in the style of R's ggplot
###Output
_____no_output_____
###Markdown
Now let's create some data (as numpy arrays, of course) and plot the results:
###Code
x = np.linspace(0, 10) # range of values from 0 to 10
y = np.sin(x) # sine of these values
plt.plot(x, y); # plot as a line
###Output
_____no_output_____
###Markdown
If you run this code live, you will see an interactive plot that lets you pan, zoom, and scroll to explore the data. This is the simplest example of a matplotlib plot; for ideas on the wide range of plot types available, see [matplotlib's online gallery](http://matplotlib.org/gallery.html) as well as other references at the end of this report. ``scipy``: Scientific Python SciPy is a collection of scientific functionality that is built on numpy. The package began as a set of Python wrappers to well-known Fortran libraries for numerical computing, and has grown from there. The package is arranged as a set of submodules, each implementing some class of numerical algorithms. Here is an incomplete sample of some of the more important ones for data science: - ``scipy.fftpack``: Fast Fourier Transforms - ``scipy.integrate``: Numerical Integration - ``scipy.interpolate``: Numerical Interpolation - ``scipy.linalg``: Linear algebra routines - ``scipy.optimize``: Numerical optimization of functions - ``scipy.sparse``: Sparse matrix storage and linear algebra - ``scipy.stats``: Statistical analysis routines For example, let's take a look at interpolating a smooth curve between some data points:
###Code
from scipy import interpolate
# choose 8 points between 0 and 10
x = np.linspace(0, 10, 8)
y = np.sin(x)
# create a cubic interpolation function
func = interpolate.interp1d(x, y, kind='cubic')
# interpolate on a grid of 1000 points
x_interp = np.linspace(0, 10, 1000)
y_interp = func(x_interp)
# plot the results
plt.figure() # new figure
plt.plot(x, y, 'o')
plt.plot(x_interp, y_interp);
###Output
_____no_output_____
###Markdown
*This notebook contains an excerpt from the [Whirlwind Tour of Python](http://www.oreilly.com/programming/free/a-whirlwind-tour-of-python.csp) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/WhirlwindTourOfPython).**The text and code are released under the [CC0](https://github.com/jakevdp/WhirlwindTourOfPython/blob/master/LICENSE) license; see also the companion project, the [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook).* A Preview of Data Science Tools If you would like to spring from here and go farther in using Python for scientific computing or data science, there are a few packages that will make your life much easier.This section will introduce and preview several of the more important ones, and give you an idea of the types of applications they are designed for.If you're using the *Anaconda* or *Miniconda* environment suggested at the beginning of this report, you can install the relevant packages with the following command:```$ conda install numpy scipy pandas matplotlib scikit-learn```Let's take a brief look at each of these in turn. NumPy: Numerical PythonNumPy provides an efficient way to store and manipulate multi-dimensional dense arrays in Python.The important features of NumPy are:- It provides an ``ndarray`` structure, which allows efficient storage and manipulation of vectors, matrices, and higher-dimensional datasets.- It provides a readable and efficient syntax for operating on this data, from simple element-wise arithmetic to more complicated linear algebraic operations.In the simplest case, NumPy arrays look a lot like Python lists.For example, here is an array containing the range of numbers 1 to 9 (compare this with Python's built-in ``range()``):
###Code
import numpy as np
x = np.arange(1, 10)
x
###Output
_____no_output_____
###Markdown
NumPy's arrays offer both efficient storage of data, as well as efficient element-wise operations on the data.For example, to square each element of the array, we can apply the "``**``" operator to the array directly:
###Code
x ** 2
###Output
_____no_output_____
###Markdown
Compare this with the much more verbose Python-style list comprehension for the same result:
###Code
[val ** 2 for val in range(1, 10)]
###Output
_____no_output_____
###Markdown
Unlike Python lists (which are limited to one dimension), NumPy arrays can be multi-dimensional.For example, here we will reshape our ``x`` array into a 3x3 array:
###Code
M = x.reshape((3, 3))
M
###Output
_____no_output_____
###Markdown
A two-dimensional array is one representation of a matrix, and NumPy knows how to efficiently do typical matrix operations. For example, you can compute the transpose using ``.T``:
###Code
M.T
###Output
_____no_output_____
###Markdown
or a matrix-vector product using ``np.dot``:
###Code
np.dot(M, [5, 6, 7])
###Output
_____no_output_____
###Markdown
and even more sophisticated operations like eigenvalue decomposition:
###Code
np.linalg.eigvals(M)
###Output
_____no_output_____
###Markdown
Such linear algebraic manipulation underpins much of modern data analysis, particularly when it comes to the fields of machine learning and data mining.For more information on NumPy, see [Resources for Further Learning](16-Further-Resources.ipynb). Pandas: Labeled Column-oriented DataPandas is a much newer package than NumPy, and is in fact built on top of it.What Pandas provides is a labeled interface to multi-dimensional data, in the form of a DataFrame object that will feel very familiar to users of R and related languages.DataFrames in Pandas look something like this:
###Code
import pandas as pd
df = pd.DataFrame({'label': ['A', 'B', 'C', 'A', 'B', 'C'],
'value': [1, 2, 3, 4, 5, 6]})
df
###Output
_____no_output_____
###Markdown
The Pandas interface allows you to do things like select columns by name:
###Code
df['label']
###Output
_____no_output_____
###Markdown
Apply string operations across string entries:
###Code
df['label'].str.lower()
###Output
_____no_output_____
###Markdown
Apply aggregates across numerical entries:
###Code
df['value'].sum()
###Output
_____no_output_____
###Markdown
And, perhaps most importantly, do efficient database-style joins and groupings:
###Code
df.groupby('label').sum()
###Output
_____no_output_____
###Markdown
Here in one line we have computed the sum of all objects sharing the same label, something that is much more verbose (and much less efficient) using tools provided in NumPy and core Python. For more information on using Pandas, see [Resources for Further Learning](16-Further-Resources.ipynb). Matplotlib: MatLab-style scientific visualization Matplotlib is currently the most popular scientific visualization package in Python. Even proponents admit that its interface is sometimes overly verbose, but it is a powerful library for creating a large range of plots. To use Matplotlib, we can start by enabling the notebook mode (for use in the Jupyter notebook) and then importing the package as ``plt``:
###Code
# run this if using Jupyter notebook
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot') # make graphs in the style of R's ggplot
###Output
_____no_output_____
###Markdown
Now let's create some data (as NumPy arrays, of course) and plot the results:
###Code
x = np.linspace(0, 10) # range of values from 0 to 10
y = np.sin(x) # sine of these values
plt.plot(x, y); # plot as a line
###Output
_____no_output_____
###Markdown
If you run this code live, you will see an interactive plot that lets you pan, zoom, and scroll to explore the data.This is the simplest example of a Matplotlib plot; for ideas on the wide range of plot types available, see [Matplotlib's online gallery](http://matplotlib.org/gallery.html) as well as other references listed in [Resources for Further Learning](16-Further-Resources.ipynb). SciPy: Scientific PythonSciPy is a collection of scientific functionality that is built on NumPy.The package began as a set of Python wrappers to well-known Fortran libraries for numerical computing, and has grown from there.The package is arranged as a set of submodules, each implementing some class of numerical algorithms.Here is an incomplete sample of some of the more important ones for data science:- ``scipy.fftpack``: Fast Fourier transforms- ``scipy.integrate``: Numerical integration- ``scipy.interpolate``: Numerical interpolation- ``scipy.linalg``: Linear algebra routines- ``scipy.optimize``: Numerical optimization of functions- ``scipy.sparse``: Sparse matrix storage and linear algebra- ``scipy.stats``: Statistical analysis routinesFor example, let's take a look at interpolating a smooth curve between some data
###Code
from scipy import interpolate
# choose eight points between 0 and 10
x = np.linspace(0, 10, 8)
y = np.sin(x)
# create a cubic interpolation function
func = interpolate.interp1d(x, y, kind='cubic')
# interpolate on a grid of 1,000 points
x_interp = np.linspace(0, 10, 1000)
y_interp = func(x_interp)
# plot the results
plt.figure() # new figure
plt.plot(x, y, 'o')
plt.plot(x_interp, y_interp);
###Output
_____no_output_____
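###Markdown
(Added illustration; a minimal sketch, not from the original text.) The other SciPy submodules listed above follow the same pattern of wrapping a numerical routine in a short Python call. For instance, ``scipy.optimize`` can minimize a function in a couple of lines; the quadratic below is an arbitrary example chosen for this note:
```python
from scipy import optimize

# minimize (x - 3)^2, starting the search at x = 0
result = optimize.minimize(lambda x: (x[0] - 3) ** 2, x0=[0.0])
result.x   # approximately array([3.])
```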
###Markdown
Basics/02-DL-Basics-Function.ipynb | ###Markdown
Introduction - Deep Learning Classical programming is all about creating a function that helps us to process input data and get the desired output. In the learning paradigm, we change the process: given a set of examples of input data and desired output, we aim to learn the function that can process the data. - In machine learning, we end up handcrafting the features and then learn the function to get the desired output. - In deep learning, we want to learn both the features and the function together to get the desired output. Theory of Deep Learning We will start with why deep learning works and explain the basis of Universal Approximation. Let us take a non-linear function - a saddle function: $$ Z = 2X^2 - 3Y^2 + 5 + \epsilon $$ Problem: A Noisy Function
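(Background note, added for context and stated loosely: the universal approximation theorem says that a feed-forward network with a single hidden layer containing enough units and a suitable non-linear activation can approximate any continuous function on a compact domain to any desired accuracy. The experiment below illustrates the idea by fitting a small network to this noisy saddle surface.)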
###Code
import sys
sys.path.append("../")
# DL & Numerical Library
import numpy as np
import keras
# Visualisation Library & Helpers
import matplotlib.pyplot as plt
%matplotlib inline
from reco import vis
x = np.arange(-1,1,0.01)
y = np.arange(-1,1,0.01)
X, Y = np.meshgrid(x, y)
c = np.ones((200,200))
e = np.random.rand(200,200)*0.1
Z = 2*X*X - 3*Y*Y + 5*c + e
vis.plot3d(X,Y,Z)
###Output
_____no_output_____
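###Markdown
(Added note.) With $X, Y \in [-1, 1)$ and the small uniform noise term, $Z$ ranges from roughly $2$ (near $x = 0, y = \pm 1$) up to roughly $7$ (near $x = \pm 1, y = 0$), so the plotted surface is a saddle centred on the constant offset of $5$.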
###Markdown
Using Neural Network Step 0: Load the Keras Model
###Code
from keras.models import Sequential, Model
from keras.layers import Dense, Input, Concatenate
###Output
_____no_output_____
###Markdown
Step 1: Create the input and output
###Code
input_xy = np.c_[X.reshape(-1),Y.reshape(-1)]
output_z = Z.reshape(-1)
input_x = X.reshape(-1)
input_y = Y.reshape(-1)
output_z.shape, input_xy.shape
###Output
_____no_output_____
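###Markdown
(Worked check, added.) Since ``x`` and ``y`` each contain 200 points, the meshgrid is 200 x 200 and the flattened arrays have 40,000 rows: ``output_z.shape`` is ``(40000,)`` and ``input_xy.shape`` is ``(40000, 2)``. Note that the model below is fed ``input_x`` and ``input_y`` as two separate inputs; ``input_xy`` is built here only to show the combined shape.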
###Markdown
Step 2: Create the Transformation & Prediction Model
###Code
def dl_model():
x_input = Input(shape=[1], name='X')
y_input = Input(shape=[1], name="Y")
xy_input = Concatenate(name="Concat")([x_input, y_input])
Dense_1 = Dense(32, activation="relu", name="Dense1")(xy_input)
Dense_2 = Dense(4, activation="relu", name="Dense2")(Dense_1)
z_output = Dense(1, name="Z")(Dense_2)
model = Model([x_input, y_input], z_output)
model.compile(loss='mean_squared_error', optimizer="sgd", metrics=["mse"])
return model
model = dl_model()
model.summary()
###Output
Model: "model_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
X (InputLayer) (None, 1) 0
__________________________________________________________________________________________________
Y (InputLayer) (None, 1) 0
__________________________________________________________________________________________________
Concat (Concatenate) (None, 2) 0 X[0][0]
Y[0][0]
__________________________________________________________________________________________________
Dense1 (Dense) (None, 32) 96 Concat[0][0]
__________________________________________________________________________________________________
Dense2 (Dense) (None, 4) 132 Dense1[0][0]
__________________________________________________________________________________________________
Z (Dense) (None, 1) 5 Dense2[0][0]
==================================================================================================
Total params: 233
Trainable params: 233
Non-trainable params: 0
__________________________________________________________________________________________________
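###Markdown
(Worked check, added.) The parameter counts in the summary follow directly from the layer sizes: ``Dense1`` maps the 2-dimensional concatenated input to 32 units (2 x 32 weights + 32 biases = 96), ``Dense2`` maps 32 units to 4 (32 x 4 + 4 = 132), and the output layer ``Z`` maps 4 units to 1 (4 + 1 = 5), for a total of 96 + 132 + 5 = 233 trainable parameters.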
###Markdown
Step 3: Compile the Model - Loss, Optimizer and Fit the Model
###Code
model.compile(loss='mean_squared_error', optimizer="sgd", metrics=["mse"])
%%time
output = model.fit([input_x, input_y], output_z, epochs=10, validation_split=0.2, shuffle=True, verbose=1)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1020: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
Train on 32000 samples, validate on 8000 samples
Epoch 1/10
32000/32000 [==============================] - 1s 34us/step - loss: 0.2778 - mean_squared_error: 0.2778 - val_loss: 1.0657 - val_mean_squared_error: 1.0657
Epoch 2/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0116 - mean_squared_error: 0.0116 - val_loss: 0.6938 - val_mean_squared_error: 0.6938
Epoch 3/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0056 - mean_squared_error: 0.0056 - val_loss: 0.5357 - val_mean_squared_error: 0.5357
Epoch 4/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0037 - mean_squared_error: 0.0037 - val_loss: 0.4700 - val_mean_squared_error: 0.4700
Epoch 5/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0029 - mean_squared_error: 0.0029 - val_loss: 0.4103 - val_mean_squared_error: 0.4103
Epoch 6/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0024 - mean_squared_error: 0.0024 - val_loss: 0.3862 - val_mean_squared_error: 0.3862
Epoch 7/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0022 - mean_squared_error: 0.0022 - val_loss: 0.3486 - val_mean_squared_error: 0.3486
Epoch 8/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0020 - mean_squared_error: 0.0020 - val_loss: 0.3230 - val_mean_squared_error: 0.3230
Epoch 9/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0019 - mean_squared_error: 0.0019 - val_loss: 0.3069 - val_mean_squared_error: 0.3069
Epoch 10/10
32000/32000 [==============================] - 1s 26us/step - loss: 0.0018 - mean_squared_error: 0.0018 - val_loss: 0.2924 - val_mean_squared_error: 0.2924
CPU times: user 12.2 s, sys: 1.12 s, total: 13.3 s
Wall time: 8.7 s
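###Markdown
(Added observation.) Note the gap between the final training error (about 0.002) and the validation error (about 0.29). Keras' ``validation_split`` takes the last 20% of the samples before shuffling, and because the rows here are ordered by grid position, the validation set is a contiguous spatial strip of the saddle that the network never sees during training, so it is effectively being asked to extrapolate.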
###Markdown
Step 4: Evaluate Model Performance
###Code
vis.metrics(output.history)
###Output
_____no_output_____
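###Markdown
(Added aside.) ``vis.metrics`` is a helper from the local ``reco`` package imported above. If that helper is unavailable, a minimal sketch of the same idea with Matplotlib directly, assuming the standard Keras history keys ``'loss'`` and ``'val_loss'``, would be:
```python
plt.figure()
plt.plot(output.history['loss'], label='training loss')
plt.plot(output.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('mean squared error')
plt.legend();
```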
###Markdown
Step 5: Make Prediction from the model
###Code
Z_pred = model.predict([input_x, input_y]).reshape(200,200)
vis.plot3d(X,Y,Z_pred)
###Output
_____no_output_____ |
colabs/dynamic_costs.ipynb | ###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting Parameters Calculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->->->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->->->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs ReportingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting Parameters Calculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs Reporting This does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting Parameters Calculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs Reporting This does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting Parameters Calculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->->->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->->->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs ReportingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
Dynamic Costs ReportingCalculate DV360 cost at the dynamic creative combination level. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Set ConfigurationThis code is required to initialize the project. Fill in required fields and press play.1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).1. If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
###Code
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
###Output
_____no_output_____
###Markdown
3. Enter Dynamic Costs Reporting Recipe Parameters 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->->->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->->->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
4. Execute Dynamic Costs ReportingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account', 'kind': 'string', 'order': 0, 'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url', 'kind': 'string', 'order': 1, 'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset', 'kind': 'string', 'order': 2, 'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
###Output
_____no_output_____
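###Markdown
(Added aside; a hypothetical sketch, not StarThinker's actual implementation.) The recipe works by substituting each ``{'field': {...}}`` placeholder inside ``TASKS`` with the matching entry from ``FIELDS`` before execution. Conceptually, that substitution looks something like this:
```python
# Illustrative only: resolve {'field': ...} placeholders against a values dict.
def resolve_fields(node, values):
    if isinstance(node, dict):
        if 'field' in node and isinstance(node['field'], dict):
            field = node['field']
            return values.get(field.get('name'), field.get('default'))
        return {key: resolve_fields(value, values) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve_fields(value, values) for value in node]
    return node

# resolved_tasks = resolve_fields(TASKS, FIELDS)
```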
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting Parameters Calculate DBM cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the DCM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in DCM named Dynamic Costs - .... 1. Wait for BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'configuration_sheet_url': '',
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs Reporting This does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting Parameters Calculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->->->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->->->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client. Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs ReportingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting ParametersCalculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'configuration_sheet_url': '',
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs ReportingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting ParametersCalculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs ReportingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'account': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'sheet': {
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'url': {'field': {'name': 'configuration_sheet_url','kind': 'string','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'name': 'bigquery_dataset','kind': 'string','order': 2,'default': 'dynamic_costs'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Dynamic Costs Reporting ParametersCalculate DV360 cost at the dynamic creative combination level. 1. Add a sheet URL. This is where you will enter advertiser and campaign level details. 1. Specify the CM network ID. 1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions. 1. Follow the instructions on the sheet; this will be your configuration. 1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - .... 1. Wait for BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis to be created or click Run Now. 1. Copy Dynamic Costs Sample Data ( Copy From This ). 1. Click Edit Connection, and Change to BigQuery->UNDEFINED->UNDEFINED->Dynamic_Costs_Analysis. 1. Copy Dynamic Costs Sample Report ( Copy From This ). 1. When prompted, choose the new data source you just created. 1. Edit the table to include or exclude columns as desired. 1. Or, give the dashboard connection instructions to the client.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'dcm_account': '',
'configuration_sheet_url': '',
'auth_write': 'service', # Credentials used for writing data.
'auth_read': 'user', # Credentials used for reading data.
'bigquery_dataset': 'dynamic_costs',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Dynamic Costs ReportingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dynamic_costs': {
'auth': 'user',
'sheet': {
'url': {'field': {'kind': 'string','name': 'configuration_sheet_url','order': 1,'default': ''}},
'tab': 'Dynamic Costs',
'template': {
'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
'tab': 'Dynamic Costs',
'range': 'A1'
},
'range': 'A2:B'
},
'out': {
'auth': 'user',
'dataset': {'field': {'kind': 'string','name': 'bigquery_dataset','order': 2,'default': 'dynamic_costs'}}
},
'account': {'field': {'kind': 'string','name': 'dcm_account','order': 0,'default': ''}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____ |
app/notebooks/problang/transcript_labeling.ipynb | ###Markdown
Table of Contents
###Code
from esper.prelude import *
from transcript_utils import *
import random
dataset = SegmentTextDataset(video_list())
initial_segments = pcache.get('initial_segments')
all_segments = set(range(len(dataset)))
likely_positive = set([dataset.segment_index(s['item_name'], s['segment']) for s in initial_segments])
likely_negative = all_segments - likely_positive
pos_idx = list(likely_positive)
neg_idx = list(likely_negative)
random.shuffle(pos_idx)
random.shuffle(neg_idx)
N = 3
def ngrams(segment, n):
return [' '.join(segment[i:i+n]) for i in range(0, len(segment)+1-n)]
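# Quick illustration of the ngrams helper on a toy token list (hypothetical tokens, not from the dataset):
#   ngrams(['the', 'cat', 'sat', 'down'], 2) -> ['the cat', 'cat sat', 'sat down']
#   ngrams(['the', 'cat', 'sat', 'down'], N) -> ['the cat sat', 'cat sat down']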
def label_widget(likely_positive, likely_negative, lexicon=[]):
lex_set = set([ngram for [ngram, _] in lexicon])
labels = []
i = 0
pos_idx = list(likely_positive)
neg_idx = list(likely_negative)
random.shuffle(pos_idx)
random.shuffle(neg_idx)
transcript = HTML(dataset[pos_idx[0]]['segment'])
box = Text(placeholder='y/n')
def on_submit(text):
nonlocal i
label = 1 if text.value == 'y' else 0
(cur_source, next_source) = (pos_idx, neg_idx) if i % 2 == 0 else (neg_idx, pos_idx)
labels.append((cur_source[i//2], label))
i += 1
transcript.value = dataset[next_source[i//2]]['segment']
box.value = ''
box.on_submit(on_submit)
display(transcript)
display(box)
return labels
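# label_widget returns a list of (segment_index, label) pairs, where label is 1 if the annotator
# typed 'y' and 0 otherwise; the shown segments alternate between the likely-positive and
# likely-negative pools.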
labels = label_widget(likely_positive, likely_negative)
print('Labels: {}'.format(len(labels)))
pcache.set('labeled_segments', labels)
labels
###Output
_____no_output_____ |
blog_notebooks/cyber/flow_classification/flow_classification_rapids.ipynb | ###Markdown
Cyber Use Case Tutorial: Multiclass Classification on IoT Flow Data with XGBoost Goals:- Learn the basics of cyber network data with respect to consumer IoT devices- Load network data into a cuDF- Explore network data and features- Use XGBoost to build a classification model- Evaluate the model To get started, we'll make sure the data is available and in the expected location. If you already have the data on your machine, change the `DATA_PATH` location to point to the appropriate location.
###Code
import os
import urllib.request
# specify the location of the data files
DATA_PATH = '../../../data/unswiot/'
if not os.path.exists(DATA_PATH):
print('creating unswiot data directory')
os.system('mkdir ../../../data/unswiot')
base_url = 'https://s3.us-east-2.amazonaws.com/rapidsai-data/datasets/unsw_iot/'
fn = 'unswiotflow.tar.gz'
if not os.path.isfile(DATA_PATH+fn):
print(f'Downloading {base_url+fn} to {DATA_PATH+fn}')
urllib.request.urlretrieve(base_url+fn, DATA_PATH+fn)
import tarfile
tar = tarfile.open(DATA_PATH+fn, "r:gz")
for tarinfo in tar:
print(tarinfo.name, "is", tarinfo.size, "bytes in size and is", end="")
if tarinfo.isreg():
print(" a regular file.")
elif tarinfo.isdir():
print(" a directory.")
else:
print(" something else.")
tar.extractall(DATA_PATH)
tar.close()
# the sample PCAP file used for explanation
DATA_PCAP = DATA_PATH + "small_sample.pcap"
# the flow connection log (conn.log) file
DATA_SOURCE = DATA_PATH + "conn.log"
# the data label file (matches IP addresses with MAC addresses)
DATA_LABELS = DATA_PATH + "lab_mac_labels_cats.csv"
###Output
_____no_output_____
###Markdown
Background The Internet of Things and Data at a Massive ScaleGartner estimates there are currently over 8.4 billion Internet of Things (IoT) devices. By 2020, that number is [estimated to surpass 20 billion](https://www.zdnet.com/article/iot-devices-will-outnumber-the-worlds-population-this-year-for-the-first-time/). These types of devices range from consumer devices (e.g., Amazon Echo, smart TVs, smart cameras, door bells) to commercial devices (e.g., building automation systems, keycard entry). All of these devices exhibit behavior on the Internet as they communicate back with their own clouds and user-specified integrations. Types of Network DataThe most detailed type of data that is typically collected on a network is full Packet CAPture (PCAP) data. This information is detailed and contains everything about the communication, including: source address, destination address, protocols used, bytes transferred, and even the raw data (e.g., image, audio file, executable). PCAP data is fine-grained, meaning that there is a record for each frame being transmitted. A typical communication is composed of many individual packets/frames.If we aggregate PCAP data so that there is one row of data per communication session, we call that flow level data. A simplified example of this relationship is shown in the figure below.For this tutorial, we use data from the University of New South Wales. In a lab environment, they [collected nearly three weeks of IoT data from 21 IoT devices](http://149.171.189.1). They also kept a detailed [list of devices by MAC address](http://149.171.189.1/resources/List_Of_Devices.txt), so we have ground-truth with respect to each IoT device's behavior on the network.**Our goal is to utilize the behavior exhibited in the network data to classify IoT devices.** Data Investigation Let's first see some of the data. We'll load a PCAP file in using Scapy. If you don't want to or can't install Scapy, feel free to skip this section.
###Code
!pip install -q scapy
from scapy.all import *
cap = rdpcap(DATA_PCAP)
eth_frame = cap[3]
ip_pkt = eth_frame.payload
segment = ip_pkt.payload
data = segment.payload
eth_frame.show()
###Output
_____no_output_____
###Markdown
There are really a lot of features there. In addition to having multiple layers (which may differ between packets), there are a number of other issues with working directly with PCAP. Often the payload (the `Raw` section above) is encrypted, rendering it useless. The lack of aggregation also makes it difficult to differentiate between packets. What we really care about for this application is what a *session* looks like. In other words, how a Roku interacts with the network is likely quite different from how a Google Home interacts. To save time for the tutorial, all three weeks of PCAP data have already been transformed to flow data, and we can load that into a typical Pandas dataframe. Due to how the data was created, we have a header row (with column names) as well as a footer row. We've already removed those rows, so nothing to do here.For this application, we used [Zeek](https://www.zeek.org) (formerly known as Bro) to construct the flow data. To include MAC addresses in the conn log, we used the [mac-logging.zeek script](https://github.com/bro/bro/blob/master/scripts/policy/protocols/conn/mac-logging.zeek).If you've skipped installing Scapy, you can pick up here.
###Code
import cudf as cd
import pandas as pd
import nvstrings
from collections import OrderedDict
%%time
pdf = pd.read_csv(DATA_SOURCE, sep='\t')
print("==> pdf shape: ",pdf.shape)
###Output
_____no_output_____
###Markdown
We can look at what this new aggregated data looks like, and get a better sense of the columns and their data types. Let's do this the way we're familiar with, using Pandas.
###Code
pdf.head()
pdf.dtypes
###Output
_____no_output_____
###Markdown
That's Pandas, and we could continue the analysis there if we wanted. But what about [cuDF](https://github.com/rapidsai/cudf)? Let's pivot to that for the majority of this tutorial.One thing cuDF needs is for us to specify the data types. We'll write a function to make this easier. As of version 0.6, [strings are supported in cuDF](https://rapidsai.github.io/projects/cudf/en/latest/10min.html?highlight=stringString-Methods). We'll make use of that here.
###Code
def get_dtypes(fn, delim, floats, strings):
with open(fn, errors='replace') as fp:
header = fp.readline().strip()
types = []
for col in header.split(delim):
if 'date' in col:
types.append((col, 'date'))
elif col in floats:
types.append((col, 'float64'))
elif col in strings:
types.append((col, 'str'))
else:
types.append((col, 'int64'))
return OrderedDict(types)
dtypes_data_processed = get_dtypes(DATA_SOURCE, '\t', floats=['ts','duration'],
strings=['uid','id.orig_h','id.resp_h','proto','service',
'conn_state','local_orig','local_resp',
'history','tunnel_parents','orig_l2_addr',
'resp_l2_addr'])
%%time
raw_cdf = cd.io.csv.read_csv(DATA_SOURCE, delimiter='\t', names=list(dtypes_data_processed),
dtype=list(dtypes_data_processed.values()), skiprows=1)
dtypes_data_processed
###Output
_____no_output_____
###Markdown
Those data types seem right. Let's see what this data looks like now that it's in cuDF.
###Code
print(raw_cdf.head())
###Output
_____no_output_____
###Markdown
Adding ground truth labels back to the data We'll need some labels for our classification task, so we've already prepared a file with those labels.
###Code
dtypes_labels_processed = get_dtypes(DATA_LABELS, ',', floats=[],
strings=['device','mac','connection','category'])
labels_cdf = cd.io.csv.read_csv(DATA_LABELS, delimiter=',', names=list(dtypes_labels_processed),
dtype=list(dtypes_labels_processed.values()), skiprows=1)
print(labels_cdf.head())
dtypes_labels_processed
###Output
_____no_output_____
###Markdown
We now perform a series of merges to add the ground truth data (device name, connection, category, and categoryID) back to the dataset. Since each row of netflow has two participants, we'll have to do this twice - once for the originator (source) and once for the responder (destination).
###Code
%%time
labels_cdf.columns = ['orig_device','orig_l2_addr','orig_connection','orig_category','orig_category_id']
merged_cdf = cd.merge(raw_cdf, labels_cdf, how='left', on='orig_l2_addr')
labels_cdf.columns = ['resp_device','resp_l2_addr','resp_connection','resp_category','resp_category_id']
merged_cdf = cd.merge(merged_cdf, labels_cdf, how='left')
###Output
_____no_output_____
###Markdown
Let's reset the `labels_cdf` column names for our own sanity.
###Code
labels_cdf.columns = ['device','mac','connection','category','category_id']
###Output
_____no_output_____
###Markdown
Let's just look at our new dataset to make sure everything's okay.
###Code
print(merged_cdf.head())
merged_cdf.dtypes
###Output
_____no_output_____
###Markdown
Exploding the Netflow Data into Originator and Responder Rows We now have netflow that has one row per (sessionized) communication between an originator and responder. However, in order to classify an individual device, we need to explode data. Instead of one row that contains both originator and responder, we'll explode to one row for originator information (orig_bytes, orig_pkts, orig_ip_bytes) and one for responder information (resp_bytes, resp_pkts, resp_ip_bytes).The easiest way to do this is to create two new dataframes, rename all of the columns, then `concat` them back together. Just for sanity, we'll also check the new shape of our exploded data frame.
###Code
orig_comms_cdf = merged_cdf[['ts','id.orig_h','id.orig_p','proto','service','duration',
'orig_bytes','orig_pkts','orig_ip_bytes','orig_device',
'orig_l2_addr','orig_category','orig_category_id']]
orig_comms_cdf.columns = ['ts','ip','port','proto','service','duration','bytes','pkts',
'ip_bytes','device','mac','category','category_id']
resp_comms_cdf = merged_cdf[['ts','id.resp_h','id.resp_p','proto','service','duration',
'resp_bytes','resp_pkts','resp_ip_bytes','resp_device',
'resp_l2_addr','resp_category','resp_category_id']]
resp_comms_cdf.columns = ['ts','ip','port','proto','service','duration','bytes','pkts',
'ip_bytes','device','mac','category','category_id']
exploded_cdf = cd.concat([orig_comms_cdf, resp_comms_cdf])
print("==> shape (original) =", merged_cdf.shape)
print("==> shape =", exploded_cdf.shape)
###Output
_____no_output_____
###Markdown
We're going to need the number of categories (classes) quite a bit, so we'll make a variable for it for easier access. For this tutorial using the data originally presented, we should have 13 categories.
###Code
num_categories = labels_cdf['category_id'].unique().shape[0]
print("==> number of IoT categories =", num_categories)
###Output
_____no_output_____
###Markdown
We currently need to remove null values before we proceed. Although `dropna` doesn't exist in cuDF yet, we can use a workaround to get us there. Also, due to what's available currently, we can't have any nulls in any place in the DF.
###Code
for col in exploded_cdf.columns:
print(col, exploded_cdf[col].null_count)
exploded_cdf['category_id'] = exploded_cdf['category_id'].fillna(-999)
exploded_cdf['device'] = exploded_cdf['device'].str.fillna("none")
exploded_cdf['category'] = exploded_cdf['category'].str.fillna("none")
for col in exploded_cdf.columns:
print(col, exploded_cdf[col].null_count)
###Output
_____no_output_____
###Markdown
Looks like all the null values are gone, so now we can proceed. If an IP doesn't have a category ID, we can't use it. So we'll filter those out.
###Code
exploded_cdf = exploded_cdf[exploded_cdf['category_id'] != -999]
exploded_cdf.shape
###Output
_____no_output_____
###Markdown
Binning the Data and Aggregating the Features But wait, there's still more data wrangling to be done! While we've exploded the flows into rows for orig/resp, we may want to bin the data further by time. The rationale is that any single communication may not be an accurate representation of how a device typically reacts in its environment. Imagine the simple case of how a streaming camera typically operates (most of its data will be uploaded from the device to a destination) versus how it operates during a firmware update (most of the data will be pushed down to the device, after which a brief disruption in connectivity will occur).There are a lot of different time binnings we could use. It would also be useful to investigate how the average connection duration relates to the number of connections per bin across various time granularities. With that said, we'll just choose a time bin of 1 hour to begin with. In order to bin, we'll use the following formula:$$\text{hour_time_bin}=\left\lfloor{\frac{ts}{60*60}}\right\rfloor$$
###Code
import math
exploded_cdf['hour_time_bin'] = exploded_cdf['ts'].applymap(lambda x: math.floor(x/(60*60))).astype(int)
###Output
_____no_output_____
###Markdown
We also have to make a choice about how we'll aggregate the binned data. One of the simplest ways is to sum the bytes and packets. There are really two choices for bytes, `bytes` and `ip_bytes`. With Bro, `bytes` is taken from the TCP sequence numbers and is potentially inaccurate, so we select `ip_bytes` instead for both originator and responder. We'll also use the sum of the number of packets.
###Code
one_hour_time_bin_cdf = (exploded_cdf[['bytes','pkts','ip_bytes','mac','category_id','hour_time_bin']]
.groupby(['mac','category_id','hour_time_bin'])
.agg({'bytes':'sum',
'pkts':'sum',
'ip_bytes':'sum'})
)
one_hour_time_bin_cdf.head()
###Output
_____no_output_____
###Markdown
Creating the Training and Testing Datasets We'll take a traditional 70/30 train/test split, and we'll randomly sample into a train and test data frame.
###Code
import numpy as np
cdf_msk = np.random.rand(len(one_hour_time_bin_cdf)) < 0.7
train_mask = cd.Series(cdf_msk)
test_mask = ~train_mask
train_cdf = one_hour_time_bin_cdf[cd.Series(cdf_msk)]
test_cdf = one_hour_time_bin_cdf[~cdf_msk]
print("==> train length =",len(train_cdf))
print("==> test length =",len(test_cdf))
###Output
_____no_output_____
###Markdown
Prepare the training input (`train_X`), training target (`train_Y`), test input (`test_X`) and test target (`test_Y`) datasets.
###Code
train_X = train_cdf[['pkts','ip_bytes']]
train_Y = train_cdf[['category_id']]
test_X = test_cdf[['pkts','ip_bytes']]
test_Y = test_cdf[['category_id']]
###Output
_____no_output_____
###Markdown
Now we just look at the head of both of these datasets (just a quick sanity check).
###Code
print(train_X.head())
print(train_Y.head())
###Output
_____no_output_____
###Markdown
Configure XGBoost We choose a classification algorithm that utilizes the GPU - [XGBoost](https://xgboost.readthedocs.io/en/latest/). The package provides support for gradient boosted trees and can leverage distributed GPU compute environments.
###Code
import xgboost as xgb
###Output
_____no_output_____
###Markdown
Getting data into a format for XGBoost is really easy. Just make a `DMatrix` for both training and testing.
###Code
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
###Output
_____no_output_____
###Markdown
Like any good ML package, there's quite a few parameters to set. We're going to start with the softmax objective function. This will let us get a predicted category out of our model. We'll also set other parameters like the maximum depth and number of threads. You can read more about the parameters [here](https://xgboost.readthedocs.io/en/latest/parameter.html). Experiment with them!
###Code
param = {}
param['objective'] = 'multi:softmax'   # predict a single class label per row
param['eta'] = 0.1                     # learning rate (shrinkage applied to each boosting step)
param['max_depth'] = 8                 # maximum depth of each tree
param['silent'] = 1                    # suppress most of XGBoost's logging
param['nthread'] = 4                   # CPU threads used for data handling
param['num_class'] = num_categories    # number of IoT device categories
param['max_features'] = 'auto'
param['n_gpus'] = 1                    # train on a single GPU
param['tree_method'] = 'gpu_hist'      # GPU-accelerated histogram algorithm
# param
###Output
_____no_output_____
###Markdown
XGBoost allows us to define a watchlist so that we can keep track of performance as the algorithm trains. We'll configure a simple watchlist that watches the `xg_train` and `xg_test` error rates.
###Code
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 20
###Output
_____no_output_____
###Markdown
Training our First XGBoost Model Now it's time to train.
###Code
bst = xgb.train(param, xg_train, num_round, watchlist)
###Output
_____no_output_____
###Markdown
Prediction is also easy (and fast).
###Code
pred = bst.predict(xg_test)
###Output
_____no_output_____
###Markdown
We might want to get a sense of how our model is doing by calculating the error rate.
###Code
pred_cdf = cd.from_pandas(pd.DataFrame(pred, columns=['pred']))
pred_cdf.add_column('category_id',test_Y['category_id'])
error_rate = (pred_cdf[pred_cdf['pred'] != pred_cdf['category_id']]['pred'].count()) / test_Y.shape[0]
error_rate
###Output
_____no_output_____
###Markdown
That's not great, but it's not terrible considering we made quite a few seemingly arbitrary decisions in both the feature selection and aggregation phases. Maybe we want to get some more insight into how our model is performing by analyzing the ROC curves for each class, micro average, and macro average. We'll revert to traditional Python data science tools to do this analysis. Analyzing the Model's Performance We'll start by importing some packages we'll need to perform this analysis. For simplicity in an already large notebook, we'll put them in a single cell.
###Code
# sklearn is used to binarize the labels as well as calculate ROC and AUC
from sklearn.metrics import roc_curve, auc,recall_score,precision_score
from sklearn.preprocessing import label_binarize
# scipy is used for interpolating the ROC curves
from scipy import interp
# our old friend matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
# choose whatever style you want
plt.style.use('fivethirtyeight')
# cycle is used just to make different colors for the different ROC curves
from itertools import cycle
###Output
_____no_output_____
###Markdown
A ROC curve analysis can be tricky for multiclass problems. One way to deal with it is to look at the ROC curve for each class. We'll take some steps to format our data so that it plays nicely with input requirements from sklearn (ah 80/20 rule, we meet again). We will also need to rerun our model with a different objective function. Rerunning the Model with the `softprob` Objective Function We used the `softmax` objective function above, but what we really want out of the model this time is probabilities that a netflow communication belongs to each of the classes. This is easy enough to do with XGBoost, as we just change the objective function to `softprob`. For simplicity, all of the configuration is in a single cell below rather than spread out. Note the only difference is the objective function change.
###Code
cdf_msk = np.random.rand(len(one_hour_time_bin_cdf)) < 0.7
train_cdf = one_hour_time_bin_cdf[cdf_msk]
test_cdf = one_hour_time_bin_cdf[~cdf_msk]
train_X = train_cdf[['pkts','ip_bytes']]
train_Y = train_cdf[['category_id']]
test_X = test_cdf[['pkts','ip_bytes']]
test_Y = test_cdf[['category_id']]
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
param = {}
param['objective'] = 'multi:softprob'
param['eta'] = 0.1
param['max_depth'] = 8
param['silent'] = 1
param['nthread'] = 4
param['num_class'] = num_categories
param['n_gpus'] = 1
param['tree_method'] = 'gpu_hist'
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 20
###Output
_____no_output_____
###Markdown
Train the model.
###Code
bst = xgb.train(param, xg_train, num_round, watchlist)
###Output
_____no_output_____
###Markdown
Okay, so we have our new model. We now take some steps to make sure the data is in a format that makes sklearn happy. First we'll use the `predict` function to compute the probabilities. To extend `roc_curve` to multiclass, we'll also need to binarize the labels. Let's keep our sanity by also making sure the lengths match.
###Code
len(bst.predict(xg_test))
probs = bst.predict(xg_test).reshape(test_Y.shape[0],param['num_class'])
###Output
_____no_output_____
###Markdown
For now, we need to convert the `test_Y` cuDF to an array. The most straightforward way to do that is to go through Pandas. It also lets us show off how nicely we can convert to Pandas, should the need arise.
###Code
test_Y_binarize = label_binarize(test_Y.to_pandas()['category_id'].values, classes=np.arange(param['num_class']))
print("==> length of probs =",len(probs))
print("==> length of test_Y_binarize =", len(test_Y_binarize))
###Output
_____no_output_____
###Markdown
Some more housekeeping. We'll create Python dictionaries to hold FPR ([false positive rate](https://en.wikipedia.org/wiki/False_positive_rate)), TPR ([true positive rate](https://en.wikipedia.org/wiki/Sensitivity_and_specificity)), and AUC ([area under the curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristicArea_under_the_curve)) values.
###Code
fpr = dict()
tpr = dict()
roc_auc = dict()
###Output
_____no_output_____
###Markdown
For each of our classes, we'll compute FPR, TPR, and AUC. We'll also compute the [micro and macro averages](http://rushdishams.blogspot.com/2011/08/micro-and-macro-average-of-precision.html).
###Code
print("==> number of classes =", num_categories)
# calculate FPR, TPR, and ROC AUC for every class
for i in range(num_categories):
fpr[i], tpr[i], _ = roc_curve(test_Y_binarize[:, i], probs[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# calculate the micro average FPR, TPR, and ROC AUC (we'll calculate the macro average below)
fpr["micro"], tpr["micro"], _ = roc_curve(test_Y_binarize.ravel(), probs.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
###Output
_____no_output_____
###Markdown
Plotting the ROC Curves Phew! Lots of code below, but it's fairly straightforward and [adapted from an example in the scikit-learn documentation](http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlmulticlass-settings). Before we plot though, we'll create a simple category lookup dictionary so we can label the classes with their actual names (not their category IDs).
###Code
labels_pdf = labels_cdf.to_pandas()
category_lookup = labels_pdf[['category','category_id']].drop_duplicates().set_index('category_id').T.to_dict()
# aggregate all of the false positive rates across all classes
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(num_categories)]))
# interpolate all of the ROC curves
mean_tpr = np.zeros_like(all_fpr)
for i in range(param['num_class']):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# average the TPR
mean_tpr /= num_categories
# compute the macro average FPR, TPR, and ROC AUC
fpr['macro'] = all_fpr
tpr['macro'] = mean_tpr
roc_auc['macro'] = auc(fpr['macro'], tpr['macro'])
# plot all of the ROC curves on a single plot (for comparison)
plt.figure(figsize=(9,9))
plt.plot(fpr['micro'], tpr['micro'],
label="micro-average ROC curve (area = {0:0.2f})"
"".format(roc_auc['micro']),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr['macro'], tpr['macro'],
label="macro-average ROC curve (area = {0:0.2f})"
"".format(roc_auc['macro']),
color='navy', linestyle=':', linewidth=4)
num_colors = param['num_class']
cm = plt.get_cmap('gist_rainbow')
colors = cycle([cm(1.*i/num_colors) for i in range(num_colors)])
lw = 2
for i, color in zip(range(param['num_class']), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label="ROC curve for "+category_lookup[i]['category']+" class (area = {1:0.2f})"
"".format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate", fontsize=12)
plt.ylabel("True Positive Rate", fontsize=12)
plt.title("ROC Curves for IoT Device Categories")
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
It's not a *terrible* plot, but it gets a little messy. We can also plot each class as its own subplot.First we make a few variables so we can control the layout.
###Code
total_subplots = num_categories
plot_grid_cols = 3
plot_grid_rows = total_subplots // plot_grid_cols
plot_grid_rows += total_subplots % plot_grid_cols
position_index = range(1, total_subplots+1)
###Output
_____no_output_____
###Markdown
Now we make the grid of plots.
###Code
plt.figure()
fig, axs = plt.subplots(plot_grid_rows, plot_grid_cols, sharex=True, sharey=True, figsize=(15,15))
lw = 2
plt_num = 0
for row in range(plot_grid_rows):
for col in range(plot_grid_cols):
if(plt_num <= 12):
axs[row,col].plot(fpr[plt_num], tpr[plt_num], lw=lw)
axs[row,col].set_title(category_lookup[plt_num]['category']+' Devices ROC Curve', fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc[plt_num]), size=11)
elif(plt_num == 13):
axs[row,col].plot(fpr['micro'], tpr['micro'], lw=lw)
axs[row,col].set_title("Micro Average ROC Curve", fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc['micro']), size=12)
elif(plt_num == 14):
axs[row,col].plot(fpr['macro'], tpr['macro'], lw=lw)
axs[row,col].set_title("Macro Average ROC Curve", fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc['macro']), size=12)
axs[row,col].set_xlabel('False Positive Rate', fontsize=10)
axs[row,col].set_ylabel('True Positive Rate', fontsize=10)
plt_num += 1
plt.xlim([-0.01, 1.0])
plt.ylim([0.0, 1.05])
plt.subplots_adjust(wspace=0.2, hspace=0.4)
plt.show()
###Output
_____no_output_____
###Markdown
Conclusions As we've shown, it's possible to get fairly decent multiclass classification results for IoT data using only basic features (bytes and packets) when aggregated. This isn't surprising, based on the fact that we used expert knowledge to assign category labels. In addition, the majority of the time, IoT devices are in a "steady state" (idle), and are not heavily influenced by human interaction. This lets us take larger samples (e.g., aggregate to longer time bins) while still maintaining decent classification performance. It should also be noted that this is a very clean dataset. The traffic is mainly IoT traffic (e.g., little traditional compute traffic), and there are no intentional abnormal activities injected (e.g., red teaming).We used Bro data, but it's also possible to use the raw PCAP data as input for classification. The preprocessing steps are more arduous than for flow data though. It'd be a great exercise... More to Explore: Possible Exercises (1) It may be useful to investigate other time binnings. Can you build another model that uses data binned to a different granularity (e.g., 5 minutes)?
###Code
# your work here
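# One possible starting point (a sketch, not a full solution): mirror the 1-hour binning above
# with a 5-minute window, then repeat the groupby/aggregation and train/test split.
# exploded_cdf['five_min_time_bin'] = exploded_cdf['ts'].applymap(lambda x: math.floor(x/(5*60))).astype(int)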
###Output
_____no_output_____
###Markdown
(2) We used the `sum` of bytes and packets for a device when aggregated to the hour. What about other ways to handle these quantitative features (e.g., average)? Would that improve the classification results?
###Code
# your work here
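# A hedged hint: swap 'sum' for another aggregation in the groupby (assuming the cuDF version
# in use supports it in .agg), e.g.
#   .agg({'bytes':'mean', 'pkts':'mean', 'ip_bytes':'mean'})
# then rebuild the train/test split and compare error rates against the sum-based features.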
###Output
_____no_output_____
###Markdown
(3) We selected specific parameters for XGBoost. These could probably use a bit more thought. You can [read more about the parameters](https://xgboost.readthedocs.io/en/latest/parameter.html) and try adjusting them on our previous dataset.
###Code
# a reminder about our parameters
print(param)
# your work here
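# A few knobs worth experimenting with (the values below are illustrative, not tuned):
# param['eta'] = 0.05      # smaller learning rate, usually paired with more boosting rounds
# param['max_depth'] = 10  # deeper trees capture more interactions but can overfit
# num_round = 50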
###Output
_____no_output_____
###Markdown
(4) There are additional features in the netflow data that we didn't use. Some other quantitative fields (e.g., duration) and categorical fields (e.g., protocol, service, ports) may be useful for classification. Build another XGBoost model using some/all of these fields.
###Code
# your work here
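# A hedged sketch (not a full solution): 'duration' is already numeric, so it could be added to
# the column selection and the aggregation, then used as an extra feature, e.g.
#   .agg({'bytes':'sum', 'pkts':'sum', 'ip_bytes':'sum', 'duration':'sum'})
#   train_X = train_cdf[['pkts','ip_bytes','duration']]
# Categorical fields (proto, service, port) would need to be encoded (e.g., one-hot) first.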
###Output
_____no_output_____
###Markdown
Cyber Use Case Tutorial: Multiclass Classification on IoT Flow Data with XGBoost Goals:- Learn the basics of cyber network data with respect to consumer IoT devices- Load network data into a cuDF- Explore network data and features- Use XGBoost to build a classification model- Evaluate the model To get started, we'll make sure the data is available and in the expected location. If you already have the data on your machine, change the `DATA_PATH` location to point to the appropriate location.
###Code
!mkdir -p ../../../data/input/unswiot
!if [ ! -f ../../../data/input/unswiot/conn.log ]; then tar -xzvf ../../../data/unswiot/unswiotflow.tar.gz -C ../../../data/input/unswiot/; fi
# specify the location of the data files
DATA_PATH = "../../../data/input/unswiot/"
# the sample PCAP file used for explanation
DATA_PCAP = DATA_PATH + "small_sample.pcap"
# the flow connection log (conn.log) file
DATA_SOURCE = DATA_PATH + "conn.log"
# the data label file (matches IP addresses with MAC addresses)
DATA_LABELS = DATA_PATH + "lab_mac_labels_cats.csv"
###Output
_____no_output_____
###Markdown
Background The Internet of Things and Data at a Massive ScaleGartner estimates there are currently over 8.4 billion Internet of Things (IoT) devices. By 2020, that number is [estimated to surpass 20 billion](https://www.zdnet.com/article/iot-devices-will-outnumber-the-worlds-population-this-year-for-the-first-time/). These types of devices range from consumer devices (e.g., Amazon Echo, smart TVs, smart cameras, door bells) to commercial devices (e.g., building automation systems, keycard entry). All of these devices exhibit behavior on the Internet as they communicate back with their own clouds and user-specified integrations. Types of Network DataThe most detailed type of data that is typically collected on a network is full Packet CAPture (PCAP) data. This information is detailed and contains everything about the communication, including: source address, destination address, protocols used, bytes transferred, and even the raw data (e.g., image, audio file, executable). PCAP data is fine-grained, meaning that there is a record for each frame being transmitted. A typical communication is composed of many individual packets/frames.If we aggregate PCAP data so that there is one row of data per communication session, we call that flow level data. A simplified example of this relationship is shown in the figure below.For this tutorial, we use data from the University of New South Wales. In a lab environment, they [collected nearly three weeks of IoT data from 21 IoT devices](http://149.171.189.1). They also kept a detailed [list of devices by MAC address](http://149.171.189.1/resources/List_Of_Devices.txt), so we have ground-truth with respect to each IoT device's behavior on the network.**Our goal is to utilize the behavior exhibited in the network data to classify IoT devices.** Data Investigation Let's first see some of the data. We'll load a PCAP file in using Scapy. If you don't want to or can't install Scapy, feel free to skip this section.
###Code
!pip install -q scapy
from scapy.all import *
cap = rdpcap(DATA_PCAP)
eth_frame = cap[3]
ip_pkt = eth_frame.payload
segment = ip_pkt.payload
data = segment.payload
eth_frame.show()
###Output
_____no_output_____
###Markdown
There are really a lot of features there. In addition to having multiple layers (which may differ between packets), there are a number of other issues with working directly with PCAP. Often the payload (the `Raw` section above) is encrypted, rendering it useless. The lack of aggregation also makes it difficult to differentiate between packets. What we really care about for this application is what a *session* looks like. In other words, how a Roku interacts with the network is likely quite different from how a Google Home interacts. To save time for the tutorial, all three weeks of PCAP data have already been transformed to flow data, and we can load that into a typical Pandas dataframe. Due to how the data was created, we have a header row (with column names) as well as a footer row. We've already removed those rows, so nothing to do here.For this application, we used [Zeek](https://www.zeek.org) (formerly known as Bro) to construct the flow data. To include MAC addresses in the conn log, we used the [mac-logging.zeek script](https://github.com/bro/bro/blob/master/scripts/policy/protocols/conn/mac-logging.zeek).If you've skipped installing Scapy, you can pick up here.
###Code
import cudf as cd
import pandas as pd
import nvstrings
from collections import OrderedDict
%%time
pdf = pd.read_csv(DATA_SOURCE, sep='\t')
print("==> pdf shape: ",pdf.shape)
###Output
_____no_output_____
###Markdown
We can look at what this new aggregated data looks like, and get a better sense of the columns and their data types. Let's do this the way we're familiar with, using Pandas.
###Code
pdf.head()
pdf.dtypes
###Output
_____no_output_____
###Markdown
That's Pandas, and we could continue the analysis there if we wanted. But what about [cuDF](https://github.com/rapidsai/cudf)? Let's pivot to that for the majority of this tutorial.One thing cuDF needs is for us to specify the data types. We'll write a function to make this easier. As of version 0.6, [strings are supported in cuDF](https://rapidsai.github.io/projects/cudf/en/latest/10min.html?highlight=stringString-Methods). We'll make use of that here.
###Code
def get_dtypes(fn, delim, floats, strings):
with open(fn, errors='replace') as fp:
header = fp.readline().strip()
types = []
for col in header.split(delim):
if 'date' in col:
types.append((col, 'date'))
elif col in floats:
types.append((col, 'float64'))
elif col in strings:
types.append((col, 'str'))
else:
types.append((col, 'int64'))
return OrderedDict(types)
dtypes_data_processed = get_dtypes(DATA_SOURCE, '\t', floats=['ts','duration'],
strings=['uid','id.orig_h','id.resp_h','proto','service',
'conn_state','local_orig','local_resp',
'history','tunnel_parents','orig_l2_addr',
'resp_l2_addr'])
%%time
raw_cdf = cd.io.csv.read_csv(DATA_SOURCE, delimiter='\t', names=list(dtypes_data_processed),
dtype=list(dtypes_data_processed.values()), skiprows=1)
dtypes_data_processed
###Output
_____no_output_____
###Markdown
Those data types seem right. Let's see what this data looks like now that it's in cuDF.
###Code
print(raw_cdf.head())
###Output
_____no_output_____
###Markdown
Adding ground truth labels back to the data We'll need some labels for our classification task, so we've already prepared a file with those labels.
###Code
dtypes_labels_processed = get_dtypes(DATA_LABELS, ',', floats=[],
strings=['device','mac','connection','category'])
labels_cdf = cd.io.csv.read_csv(DATA_LABELS, delimiter=',', names=list(dtypes_labels_processed),
dtype=list(dtypes_labels_processed.values()), skiprows=1)
print(labels_cdf.head())
dtypes_labels_processed
###Output
_____no_output_____
###Markdown
We now perform a series of merges to add the ground truth data (device name, connection, category, and categoryID) back to the dataset. Since each row of netflow has two participants, we'll have to do this twice - once for the originator (source) and once for the responder (destination).
###Code
%%time
labels_cdf.columns = ['orig_device','orig_l2_addr','orig_connection','orig_category','orig_category_id']
merged_cdf = cd.merge(raw_cdf, labels_cdf, how='left', on='orig_l2_addr')
labels_cdf.columns = ['resp_device','resp_l2_addr','resp_connection','resp_category','resp_category_id']
merged_cdf = cd.merge(merged_cdf, labels_cdf, how='left')
###Output
_____no_output_____
###Markdown
Let's reset the `labels_cdf` column names for our own sanity.
###Code
labels_cdf.columns = ['device','mac','connection','category','category_id']
###Output
_____no_output_____
###Markdown
Let's just look at our new dataset to make sure everything's okay.
###Code
print(merged_cdf.head())
merged_cdf.dtypes
###Output
_____no_output_____
###Markdown
Exploding the Netflow Data into Originator and Responder Rows We now have netflow that has one row per (sessionized) communication between an originator and responder. However, in order to classify an individual device, we need to explode the data. Instead of one row that contains both originator and responder, we'll explode to one row for originator information (orig_bytes, orig_pkts, orig_ip_bytes) and one for responder information (resp_bytes, resp_pkts, resp_ip_bytes).The easiest way to do this is to create two new dataframes, rename all of the columns, then `concat` them back together. Just for sanity, we'll also check the new shape of our exploded data frame.
###Code
orig_comms_cdf = merged_cdf[['ts','id.orig_h','id.orig_p','proto','service','duration',
'orig_bytes','orig_pkts','orig_ip_bytes','orig_device',
'orig_l2_addr','orig_category','orig_category_id']]
orig_comms_cdf.columns = ['ts','ip','port','proto','service','duration','bytes','pkts',
'ip_bytes','device','mac','category','category_id']
resp_comms_cdf = merged_cdf[['ts','id.resp_h','id.resp_p','proto','service','duration',
'resp_bytes','resp_pkts','resp_ip_bytes','resp_device',
'resp_l2_addr','resp_category','resp_category_id']]
resp_comms_cdf.columns = ['ts','ip','port','proto','service','duration','bytes','pkts',
'ip_bytes','device','mac','category','category_id']
exploded_cdf = cd.multi.concat([orig_comms_cdf, resp_comms_cdf])
print("==> shape (original) =", merged_cdf.shape)
print("==> shape =", exploded_cdf.shape)
###Output
_____no_output_____
###Markdown
We're going to need the number of categories (classes) quite a bit, so we'll make a variable for it for easier access. For this tutorial using the data originally presented, we should have 13 categories.
###Code
num_categories = labels_cdf['category_id'].unique().shape[0]
print("==> number of IoT categories =", num_categories)
###Output
_____no_output_____
###Markdown
We currently need to remove null values before we proceed. Although `dropna` doesn't exist in cuDF yet, we can use a workaround to get us there. Also, due to what's available currently, we can't have any nulls in any place in the DF.
###Code
for col in exploded_cdf.columns:
print(col, exploded_cdf[col].null_count)
exploded_cdf['category_id'] = exploded_cdf['category_id'].fillna(-999)
exploded_cdf['device'] = exploded_cdf['device'].str.fillna("none")
exploded_cdf['category'] = exploded_cdf['category'].str.fillna("none")
for col in exploded_cdf.columns:
print(col, exploded_cdf[col].null_count)
###Output
_____no_output_____
###Markdown
Looks like all the null values are gone, so now we can proceed. If an IP doesn't have a category ID, we can't use it. So we'll filter those out.
###Code
exploded_cdf = exploded_cdf[exploded_cdf['category_id'] != -999]
exploded_cdf.shape
###Output
_____no_output_____
###Markdown
Binning the Data and Aggregating the Features But wait, there's still more data wrangling to be done! While we've exploded the flows into rows for orig/resp, we may want to bin the data further by time. The rationale is that any single communication may not be an accurate representation of how a device typically reacts in its environment. Imagine the simple case of how a streaming camera typically operates (most of its data will be uploaded from the device to a destination) versus how it operates during a firmware update (most of the data will be pushed down to the device, after which a brief disruption in connectivity will occur).There are a lot of different time binnings we could use. It would also be useful to investigate how the average connection duration relates to the number of connections per bin across various time granularities. With that said, we'll just choose a time bin of 1 hour to begin with. In order to bin, we'll use the following formula:$$\text{hour_time_bin}=\left\lfloor{\frac{ts}{60*60}}\right\rfloor$$
###Code
import math
exploded_cdf['hour_time_bin'] = exploded_cdf['ts'].applymap(lambda x: math.floor(x/(60*60))).astype(int)
###Output
_____no_output_____
###Markdown
We also have to make a choice about how we'll aggregate the binned data. One of the simplest ways is to sum the bytes and packets. There are really two choices for bytes, `bytes` and `ip_bytes`. With Bro, `bytes` is taken from the TCP sequence numbers and is potentially inaccurate, so we select `ip_bytes` instead for both originator and responder. We'll also use the sum of the number of packets.
###Code
one_hour_time_bin_cdf = (exploded_cdf[['bytes','pkts','ip_bytes',
'mac','category_id',
'hour_time_bin']]
.groupby(['mac','category_id','hour_time_bin'])
.agg({'bytes':'sum',
'pkts':'sum',
'ip_bytes':'sum'})
)
one_hour_time_bin_cdf.columns = ['mac', 'category_id', 'hour_time_bin',
'bytes', 'pkts', 'ip_bytes']
###Output
_____no_output_____
###Markdown
Creating the Training and Testing Datasets We'll take a traditional 70/30 train/test split, and we'll randomly sample into a train and test data frame.
###Code
import numpy as np
cdf_msk = np.random.rand(len(one_hour_time_bin_cdf)) < 0.7
train_cdf = one_hour_time_bin_cdf[cdf_msk]
test_cdf = one_hour_time_bin_cdf[~cdf_msk]
print("==> train length =",len(train_cdf))
print("==> test length =",len(test_cdf))
###Output
_____no_output_____
###Markdown
Prepare the training input (`train_X`), training target (`train_Y`), test input (`test_X`) and test target (`test_Y`) datasets.
###Code
train_X = train_cdf[['pkts','ip_bytes']]
train_Y = train_cdf[['category_id']]
test_X = test_cdf[['pkts','ip_bytes']]
test_Y = test_cdf[['category_id']]
###Output
_____no_output_____
###Markdown
Now we just look at the head of both of these datasets (just a quick sanity check).
###Code
print(train_X.head())
print(train_Y.head())
###Output
_____no_output_____
###Markdown
Configure XGBoost We choose a classification algorithm that utilizes the GPU - [XGBoost](https://xgboost.readthedocs.io/en/latest/). The package provides support for gradient boosted trees and can leverage distributed GPU compute environments.
###Code
import xgboost as xgb
###Output
_____no_output_____
###Markdown
Getting data into a format for XGBoost is really easy. Just make a `DMatrix` for both training and testing.
###Code
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
###Output
_____no_output_____
###Markdown
Like any good ML package, there's quite a few parameters to set. We're going to start with the softmax objective function. This will let us get a predicted category out of our model. We'll also set other parameters like the maximum depth and number of threads. You can read more about the parameters [here](https://xgboost.readthedocs.io/en/latest/parameter.html). Experiment with them!
###Code
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.1
param['max_depth'] = 8
param['silent'] = 1
param['nthread'] = 4
param['num_class'] = num_categories
param['max_features'] = 'auto'
param['n_gpus'] = 1
param['tree_method'] = 'gpu_hist'
# param
###Output
_____no_output_____
###Markdown
XGBoost allows us to define a watchlist so that we can keep track of performance as the algorithm trains. We'll configure a simple watchlist that watches the `xg_train` and `xg_test` error rates.
###Code
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 20
###Output
_____no_output_____
###Markdown
Training our First XGBoost Model Now it's time to train.
###Code
bst = xgb.train(param, xg_train, num_round, watchlist)
###Output
_____no_output_____
###Markdown
Prediction is also easy (and fast).
###Code
pred = bst.predict(xg_test)
###Output
_____no_output_____
###Markdown
We might want to get a sense of how our model is doing by calculating the error rate.
###Code
pred_cdf = cd.from_pandas(pd.DataFrame(pred, columns=['pred']))
pred_cdf.add_column('category_id',test_Y['category_id'])
error_rate = (pred_cdf[pred_cdf['pred'] != pred_cdf['category_id']]['pred'].count()) / test_Y.shape[0]
error_rate
###Output
_____no_output_____
###Markdown
That's not great, but it's not terrible considering we made quite a few seemingly arbitrary decisions in both the feature selection and aggregation phases. Maybe we want to get some more insight into how our model is performing by analyzing the ROC curves for each class, micro average, and macro average. We'll revert to traditional Python data science tools to do this analysis. Analyzing the Model's Performance We'll start by importing some packages we'll need to perform this analysis. For simplicity in an already large notebook, we'll put them in a single cell.
###Code
# sklearn is used to binarize the labels as well as calculate ROC and AUC
from sklearn.metrics import roc_curve, auc,recall_score,precision_score
from sklearn.preprocessing import label_binarize
# scipy is used for interpolating the ROC curves
from scipy import interp
# our old friend matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
# choose whatever style you want
plt.style.use('fivethirtyeight')
# cycle is used just to make different colors for the different ROC curves
from itertools import cycle
###Output
_____no_output_____
###Markdown
A ROC curve analysis can be tricky for multiclass problems. One way to deal with it is to look at the ROC curve for each class. We'll take some steps to format our data so that it plays nicely with input requirements from sklearn (ah 80/20 rule, we meet again). We will also need to rerun our model with a different objective function. Rerunning the Model with the `softprob` Objective Function We used the `softmax` objective function above, but what we really want out of the model this time is probabilities that a netflow communication belongs to each of the classes. This is easy enough to do with XGBoost, as we just change the objective function to `softprob`. For simplicity, all of the configuration is in a single cell below rather than spread out. Note the only difference is the objective function change.
###Code
cdf_msk = np.random.rand(len(one_hour_time_bin_cdf)) < 0.7
train_cdf = one_hour_time_bin_cdf[cdf_msk]
test_cdf = one_hour_time_bin_cdf[~cdf_msk]
train_X = train_cdf[['pkts','ip_bytes']]
train_Y = train_cdf[['category_id']]
test_X = test_cdf[['pkts','ip_bytes']]
test_Y = test_cdf[['category_id']]
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
param = {}
param['objective'] = 'multi:softprob'
param['eta'] = 0.1
param['max_depth'] = 8
param['silent'] = 1
param['nthread'] = 4
param['num_class'] = num_categories
param['n_gpus'] = 1
param['tree_method'] = 'gpu_hist'
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 20
###Output
_____no_output_____
###Markdown
Train the model.
###Code
bst = xgb.train(param, xg_train, num_round, watchlist)
###Output
_____no_output_____
###Markdown
Okay, so we have our new model. We now take some steps to make sure the data is in a format that makes sklearn happy. First we'll use the `predict` function to compute the probabilities. To extend `roc_curve` to multiclass, we'll also need to binarize the labels. Let's keep our sanity by also making sure the lengths match.
###Code
len(bst.predict(xg_test))
probs = bst.predict(xg_test).reshape(test_Y.shape[0],param['num_class'])
###Output
_____no_output_____
###Markdown
For now, we need to convert the `test_Y` cuDF to an array. The most straightforward way to do that is to go through Pandas. It also lets us show off how nicely we can convert to Pandas, should the need arise.
###Code
test_Y_binarize = label_binarize(test_Y.to_pandas()['category_id'].values, classes=np.arange(param['num_class']))
print("==> length of probs =",len(probs))
print("==> length of test_Y_binarize =", len(test_Y_binarize))
###Output
_____no_output_____
###Markdown
Some more housekeeping. We'll create Python dictionaries to hold FPR ([false positive rate](https://en.wikipedia.org/wiki/False_positive_rate)), TPR ([true positive rate](https://en.wikipedia.org/wiki/Sensitivity_and_specificity)), and AUC ([area under the curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristicArea_under_the_curve)) values.
###Code
fpr = dict()
tpr = dict()
roc_auc = dict()
###Output
_____no_output_____
###Markdown
For each of our classes, we'll computer FPR, TPR, and AUC. We're also compute the [micro and macro averages](http://rushdishams.blogspot.com/2011/08/micro-and-macro-average-of-precision.html).
###Code
print("==> number of classes =", num_categories)
# calculate FPR, TPR, and ROC AUC for every class
for i in range(num_categories):
fpr[i], tpr[i], _ = roc_curve(test_Y_binarize[:, i], probs[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# calculate the micro average FPR, TPR, and ROC AUC (we'll calculate the macro average below)
fpr["micro"], tpr["micro"], _ = roc_curve(test_Y_binarize.ravel(), probs.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
###Output
_____no_output_____
###Markdown
Plotting the ROC Curves Phew! Lots of code below, but it's fairly straightforward and [adapted from an example in the scikit-learn documentation](http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.htmlmulticlass-settings). Before we plot though, we'll create a simple category lookup dictionary so we can label the classes with their actual names (not their category IDs).
###Code
labels_pdf = labels_cdf.to_pandas()
category_lookup = labels_pdf[['category','category_id']].drop_duplicates().set_index('category_id').T.to_dict()
# aggregate all of the false positive rates across all classes
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(num_categories)]))
# interpolate all of the ROC curves
mean_tpr = np.zeros_like(all_fpr)
for i in range(param['num_class']):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# average the TPR
mean_tpr /= num_categories
# compute the macro average FPR, TPR, and ROC AUC
fpr['macro'] = all_fpr
tpr['macro'] = mean_tpr
roc_auc['macro'] = auc(fpr['macro'], tpr['macro'])
# plot all of the ROC curves on a single plot (for comparison)
plt.figure(figsize=(9,9))
plt.plot(fpr['micro'], tpr['micro'],
label="micro-average ROC curve (area = {0:0.2f})"
"".format(roc_auc['micro']),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr['macro'], tpr['macro'],
label="macro-average ROC curve (area = {0:0.2f})"
"".format(roc_auc['macro']),
color='navy', linestyle=':', linewidth=4)
num_colors = param['num_class']
cm = plt.get_cmap('gist_rainbow')
colors = cycle([cm(1.*i/num_colors) for i in range(num_colors)])
lw = 2
for i, color in zip(range(param['num_class']), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label="ROC curve for "+category_lookup[i]['category']+" class (area = {1:0.2f})"
"".format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate", fontsize=12)
plt.ylabel("True Positive Rate", fontsize=12)
plt.title("ROC Curves for IoT Device Categories")
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
It's not a *terrible* plot, but it gets a little messy. We can also plot each class as its own subplot.First we make a few variables so we can control the layout.
###Code
total_subplots = num_categories
plot_grid_cols = 3
plot_grid_rows = total_subplots // plot_grid_cols
plot_grid_rows += total_subplots % plot_grid_cols
position_index = range(1, total_subplots+1)
###Output
_____no_output_____
###Markdown
Now we make the grid of plots.
###Code
plt.figure()
fig, axs = plt.subplots(plot_grid_rows, plot_grid_cols, sharex=True, sharey=True, figsize=(15,15))
lw = 2
plt_num = 0
for row in range(plot_grid_rows):
for col in range(plot_grid_cols):
if(plt_num <= 12):
axs[row,col].plot(fpr[plt_num], tpr[plt_num], lw=lw)
axs[row,col].set_title(category_lookup[plt_num]['category']+' Devices ROC Curve', fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc[plt_num]), size=11)
elif(plt_num == 13):
axs[row,col].plot(fpr['micro'], tpr['micro'], lw=lw)
axs[row,col].set_title("Micro Average ROC Curve", fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc['micro']), size=12)
elif(plt_num == 14):
axs[row,col].plot(fpr['macro'], tpr['macro'], lw=lw)
axs[row,col].set_title("Macro Average ROC Curve", fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc['macro']), size=12)
axs[row,col].set_xlabel('False Positive Rate', fontsize=10)
axs[row,col].set_ylabel('True Positive Rate', fontsize=10)
plt_num += 1
plt.xlim([-0.01, 1.0])
plt.ylim([0.0, 1.05])
plt.subplots_adjust(wspace=0.2, hspace=0.4)
plt.show()
###Output
_____no_output_____
###Markdown
Conclusions As we've shown, it's possible to get fairly decent multiclass classification results for IoT data using only basic features (bytes and packets) when aggregated. This isn't surprising, based on the fact that we used expert knowledge to assign category labels. In addition, the majority of the time, IoT devices are in a "steady state" (idle), and are not heavily influenced by human interaction. This lets us take larger samples (e.g., aggregate to longer time bins) while still maintaining decent classification performance. It should also be noted that this is a very clean dataset. The traffic is mainly IoT traffic (e.g., little traditional compute traffic), and there are no intentional abnormal activities injected (e.g., red teaming).We used Bro data, but it's also possible to use the raw PCAP data as input for classification. The preprocessing steps are more arduous than for flow data though. It'd be a great exercise... More to Explore: Possible Exercises (1) It may be useful to investigate other time binnings. Can you build another model that uses data binned to a different granularity (e.g., 5 minutes)?
###Code
# your work here
###Output
_____no_output_____
###Markdown
(2) We used the `sum` of bytes and packets for a device when aggregated to the hour. What about other ways to handle these quantitative features (e.g., average)? Would that improve the classification results?
###Code
# your work here
###Output
_____no_output_____
###Markdown
(3) We selected specific parameters for XGBoost. These could probably use a bit more thought. You can [read more about the parameters](https://xgboost.readthedocs.io/en/latest/parameter.html) and try adjusting them on our previous dataset.
###Code
# a reminder about our parameters
print(param)
# your work here
###Output
_____no_output_____
###Markdown
(4) There are additional features in the netflow data that we didn't use. Some other quantitative fields (e.g., duration) and categorical fields (e.g., protocol, service, ports) may be useful for classification. Build another XGBoost model using some/all of these fields.
###Code
# your work here
###Output
_____no_output_____
###Markdown
Cyber Use Case Tutorial: Multiclass Classification on IoT Flow Data with XGBoost

Goals:
- Learn the basics of cyber network data with respect to consumer IoT devices
- Load network data into a cuDF
- Explore network data and features
- Use XGBoost to build a classification model
- Evaluate the model

To get started, we'll make sure the data is available and in the expected location. If you already have the data on your machine, change the `DATA_PATH` location to point to the appropriate location.
###Code
import os
import urllib.request
# specify the location of the data files
DATA_PATH = '../../../data/unswiot/'
if not os.path.exists(DATA_PATH):
print('creating unswiot data directory')
os.system('mkdir ../../../data/unswiot')
base_url = 'https://s3.us-east-2.amazonaws.com/rapidsai-data/datasets/unsw_iot/'
fn = 'unswiotflow.tar.gz'
if not os.path.isfile(DATA_PATH+fn):
print(f'Downloading {base_url+fn} to {DATA_PATH+fn}')
urllib.request.urlretrieve(base_url+fn, DATA_PATH+fn)
import tarfile
tar = tarfile.open(DATA_PATH+fn, "r:gz")
for tarinfo in tar:
print(tarinfo.name, "is", tarinfo.size, "bytes in size and is", end="")
if tarinfo.isreg():
print(" a regular file.")
elif tarinfo.isdir():
print(" a directory.")
else:
print(" something else.")
tar.extractall(DATA_PATH)
tar.close()
# the sample PCAP file used for explanation
DATA_PCAP = DATA_PATH + "small_sample.pcap"
# the flow connection log (conn.log) file
DATA_SOURCE = DATA_PATH + "conn.log"
# the data label file (matches IP addresses with MAC addresses)
DATA_LABELS = DATA_PATH + "lab_mac_labels_cats.csv"
###Output
_____no_output_____
###Markdown
Background

The Internet of Things and Data at a Massive Scale

Gartner estimates there are currently over 8.4 billion Internet of Things (IoT) devices. By 2020, that number is [estimated to surpass 20 billion](https://www.zdnet.com/article/iot-devices-will-outnumber-the-worlds-population-this-year-for-the-first-time/). These types of devices range from consumer devices (e.g., Amazon Echo, smart TVs, smart cameras, door bells) to commercial devices (e.g., building automation systems, keycard entry). All of these devices exhibit behavior on the Internet as they communicate back with their own clouds and user-specified integrations.

Types of Network Data

The most detailed type of data that is typically collected on a network is full Packet CAPture (PCAP) data. This information is detailed and contains everything about the communication, including: source address, destination address, protocols used, bytes transferred, and even the raw data (e.g., image, audio file, executable). PCAP data is fine-grained, meaning that there is a record for each frame being transmitted. A typical communication is composed of many individual packets/frames. If we aggregate PCAP data so that there is one row of data per communication session, we call that flow level data. A simplified example of this relationship is shown in the figure below.

For this tutorial, we use data from the University of New South Wales. In a lab environment, they [collected nearly three weeks of IoT data from 21 IoT devices](http://149.171.189.1). They also kept a detailed [list of devices by MAC address](http://149.171.189.1/resources/List_Of_Devices.txt), so we have ground-truth with respect to each IoT device's behavior on the network.

**Our goal is to utilize the behavior exhibited in the network data to classify IoT devices.**

Data Investigation

Let's first see some of the data. We'll load a PCAP file in using Scapy. If you don't want to or can't install Scapy, feel free to skip this section.
###Code
!pip install -q scapy
from scapy.all import *
cap = rdpcap(DATA_PCAP)
eth_frame = cap[3]
ip_pkt = eth_frame.payload
segment = ip_pkt.payload
data = segment.payload
eth_frame.show()
###Output
_____no_output_____
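###Markdown
To make the PCAP-to-flow relationship from the background section concrete, here is a minimal sketch (not part of the original pipeline) that aggregates the Scapy packets loaded above (`cap`) into flow-like rows keyed on source, destination, and protocol. The column names and the use of plain pandas here are illustrative assumptions; the real flow records used below come from Zeek.
###Code
from scapy.layers.inet import IP
import pandas as pd
# one row per IP packet, then aggregate to one row per (src, dst, proto) "flow"
rows = [{'src': p[IP].src, 'dst': p[IP].dst, 'proto': p[IP].proto, 'bytes': len(p)}
        for p in cap if IP in p]
flow_like = (pd.DataFrame(rows)
             .groupby(['src', 'dst', 'proto'])['bytes']
             .agg(['count', 'sum'])
             .rename(columns={'count': 'pkts', 'sum': 'total_bytes'})
             .reset_index())
flow_like.head()
###Output
_____no_output_____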
###Markdown
There are really a lot of features there. In addition to having multiple layers (which may differ between packets), there are a number of other issues with working directly with PCAP. Often the payload (the `Raw` section above) is encrypted, rendering it useless. The lack of aggregation also makes it difficult to differentiate between packets. What we really care about for this application is what a *session* looks like. In other words, how a Roku interacts with the network is likely quite different than how a Google Home interacts. To save time for the tutorial, all three weeks of PCAP data have already been transformed to flow data, and we can load that into a typical Pandas dataframe. Due to how the data was created, we have a header row (with column names) as well as a footer row. We've already removed those rows, so nothing to do here.

For this application, we used [Zeek](https://www.zeek.org) (formerly known as Bro) to construct the flow data. To include MAC addresses in the conn log, we used the [mac-logging.zeek script](https://github.com/bro/bro/blob/master/scripts/policy/protocols/conn/mac-logging.zeek). If you've skipped installing Scapy, you can pick up here.
###Code
import cudf as cd
import pandas as pd
import nvstrings
from collections import OrderedDict
%%time
pdf = pd.read_csv(DATA_SOURCE, sep='\t')
print("==> pdf shape: ",pdf.shape)
###Output
_____no_output_____
###Markdown
We can look at what this new aggregated data looks like, and get a better sense of the columns and their data types. Let's do this the way we're familiar with, using Pandas.
###Code
pdf.head()
pdf.dtypes
###Output
_____no_output_____
###Markdown
That's Pandas, and we could continue the analysis there if we wanted. But what about [cuDF](https://github.com/rapidsai/cudf)? Let's pivot to that for the majority of this tutorial.

One thing cuDF needs is for us to specify the data types. We'll write a function to make this easier. As of version 0.6, [strings are supported in cuDF](https://rapidsai.github.io/projects/cudf/en/latest/10min.html?highlight=string#String-Methods). We'll make use of that here.
###Code
def get_dtypes(fn, delim, floats, strings):
with open(fn, errors='replace') as fp:
header = fp.readline().strip()
types = []
for col in header.split(delim):
if 'date' in col:
types.append((col, 'date'))
elif col in floats:
types.append((col, 'float64'))
elif col in strings:
types.append((col, 'str'))
else:
types.append((col, 'int64'))
return OrderedDict(types)
dtypes_data_processed = get_dtypes(DATA_SOURCE, '\t', floats=['ts','duration'],
strings=['uid','id.orig_h','id.resp_h','proto','service',
'conn_state','local_orig','local_resp',
'history','tunnel_parents','orig_l2_addr',
'resp_l2_addr'])
%%time
raw_cdf = cd.io.csv.read_csv(DATA_SOURCE, delimiter='\t', names=list(dtypes_data_processed),
dtype=list(dtypes_data_processed.values()), skiprows=1)
dtypes_data_processed
###Output
_____no_output_____
###Markdown
Those data types seem right. Let's see what this data looks like now that it's in cuDF.
###Code
print(raw_cdf.head())
###Output
_____no_output_____
###Markdown
Adding ground truth labels back to the data We'll need some labels for our classification task, so we've already prepared a file with those labels.
###Code
dtypes_labels_processed = get_dtypes(DATA_LABELS, ',', floats=[],
strings=['device','mac','connection','category'])
labels_cdf = cd.io.csv.read_csv(DATA_LABELS, delimiter=',', names=list(dtypes_labels_processed),
dtype=list(dtypes_labels_processed.values()), skiprows=1)
print(labels_cdf.head())
dtypes_labels_processed
###Output
_____no_output_____
###Markdown
We now perform a series of merges to add the ground truth data (device name, connection, category, and categoryID) back to the dataset. Since each row of netflow has two participants, we'll have to do this twice - once for the originator (source) and once for the responder (destination).
###Code
%%time
labels_cdf.columns = ['orig_device','orig_l2_addr','orig_connection','orig_category','orig_category_id']
merged_cdf = cd.merge(raw_cdf, labels_cdf, how='left', on='orig_l2_addr')
labels_cdf.columns = ['resp_device','resp_l2_addr','resp_connection','resp_category','resp_category_id']
merged_cdf = cd.merge(merged_cdf, labels_cdf, how='left')
###Output
_____no_output_____
###Markdown
Let's reset the `labels_cdf` column names for our own sanity.
###Code
labels_cdf.columns = ['device','mac','connection','category','category_id']
###Output
_____no_output_____
###Markdown
Let's just look at our new dataset to make sure everything's okay.
###Code
print(merged_cdf.head())
merged_cdf.dtypes
###Output
_____no_output_____
###Markdown
Exploding the Netflow Data into Originator and Responder Rows

We now have netflow that has one row per (sessionized) communication between an originator and responder. However, in order to classify an individual device, we need to explode the data. Instead of one row that contains both originator and responder, we'll explode to one row for originator information (orig_bytes, orig_pkts, orig_ip_bytes) and one for responder information (resp_bytes, resp_pkts, resp_ip_bytes).

The easiest way to do this is to create two new dataframes, rename all of the columns, then `concat` them back together. Just for sanity, we'll also check the new shape of our exploded data frame.
###Code
orig_comms_cdf = merged_cdf[['ts','id.orig_h','id.orig_p','proto','service','duration',
'orig_bytes','orig_pkts','orig_ip_bytes','orig_device',
'orig_l2_addr','orig_category','orig_category_id']]
orig_comms_cdf.columns = ['ts','ip','port','proto','service','duration','bytes','pkts',
'ip_bytes','device','mac','category','category_id']
resp_comms_cdf = merged_cdf[['ts','id.resp_h','id.resp_p','proto','service','duration',
'resp_bytes','resp_pkts','resp_ip_bytes','resp_device',
'resp_l2_addr','resp_category','resp_category_id']]
resp_comms_cdf.columns = ['ts','ip','port','proto','service','duration','bytes','pkts',
'ip_bytes','device','mac','category','category_id']
exploded_cdf = cd.concat([orig_comms_cdf, resp_comms_cdf])
print("==> shape (original) =", merged_cdf.shape)
print("==> shape =", exploded_cdf.shape)
###Output
_____no_output_____
###Markdown
We're going to need the number of categories (classes) quite a bit, so we'll make a variable for it for easier access. For this tutorial using the data originally presented, we should have 13 categories.
###Code
num_categories = labels_cdf['category_id'].unique().shape[0]
print("==> number of IoT categories =", num_categories)
###Output
_____no_output_____
###Markdown
We currently need to remove null values before we proceed. Although `dropna` doesn't exist in cuDF yet, we can use a workaround to get us there. Also, due to what's available currently, we can't have any nulls in any place in the DF.
###Code
for col in exploded_cdf.columns:
print(col, exploded_cdf[col].null_count)
exploded_cdf['category_id'] = exploded_cdf['category_id'].fillna(-999)
exploded_cdf['device'] = exploded_cdf['device'].str.fillna("none")
exploded_cdf['category'] = exploded_cdf['category'].str.fillna("none")
for col in exploded_cdf.columns:
print(col, exploded_cdf[col].null_count)
###Output
_____no_output_____
###Markdown
Looks like all the null values are gone, so now we can proceed. If an IP doesn't have a category ID, we can't use it. So we'll filter those out.
###Code
exploded_cdf = exploded_cdf[exploded_cdf['category_id'] != -999]
exploded_cdf.shape
###Output
_____no_output_____
###Markdown
Binning the Data and Aggregating the Features

But wait, there's still more data wrangling to be done! While we've exploded the flows into rows for orig/resp, we may want to bin the data further by time. The rationale is that any single communication may not be an accurate representation of how a device typically reacts in its environment. Imagine the simple case of how a streaming camera typically operates (most of its data will be uploaded from the device to a destination) versus how it operates during a firmware update (most of the data will be pushed down to the device, after which a brief disruption in connectivity will occur).

There are a lot of different time binnings we could do. It would also be useful to investigate how the average connection duration relates to the number of connections per bin across various time granularities. With that said, we'll just choose a time bin of 1 hour to begin with. In order to bin, we'll use the following formula:

$$\text{hour\_time\_bin}=\left\lfloor{\frac{ts}{60 \cdot 60}}\right\rfloor$$
###Code
import math
exploded_cdf['hour_time_bin'] = exploded_cdf['ts'].applymap(lambda x: math.floor(x/(60*60))).astype(int)
###Output
_____no_output_____
###Markdown
We also have to make a choice about how we'll aggregate the binned data. One of the simplest ways is to sum the bytes and packets. There are really two choices for bytes, `bytes` and `ip_bytes`. With Bro, `bytes` is taken from the TCP sequence numbers and is potentially inaccurate, so we select `ip_bytes` instead for both originator and responder. We'll also use the sum of the number of packets.
###Code
one_hour_time_bin_cdf = (exploded_cdf[['bytes','pkts','ip_bytes','mac','category_id','hour_time_bin']]
.groupby(['mac','category_id','hour_time_bin'])
.agg({'bytes':'sum',
'pkts':'sum',
'ip_bytes':'sum'})
)
one_hour_time_bin_cdf.head()
###Output
_____no_output_____
###Markdown
Creating the Training and Testing Datasets

We'll take a traditional 70/30 train/test split, and we'll randomly sample into a train and test data frame.
###Code
import numpy as np
cdf_msk = np.random.rand(len(one_hour_time_bin_cdf)) < 0.7
train_cdf = one_hour_time_bin_cdf[cdf_msk]
test_cdf = one_hour_time_bin_cdf[~cdf_msk]
print("==> train length =",len(train_cdf))
print("==> test length =",len(test_cdf))
###Output
_____no_output_____
###Markdown
Prepare the training input (`train_X`), training target (`train_Y`), test input (`test_X`) and test target (`test_Y`) datasets.
###Code
train_X = train_cdf[['pkts','ip_bytes']]
train_Y = train_cdf.index.get_level_values(1)
test_X = test_cdf[['pkts','ip_bytes']]
test_Y = test_cdf.index.get_level_values(1)
###Output
_____no_output_____
###Markdown
Now we just look at the head of both of these datasets (just a quick sanity check).
###Code
print(train_X.head())
print(train_Y.head())
###Output
_____no_output_____
###Markdown
Configure XGBoost We choose a classification algorithm that utilizes the GPU - [XGBoost](https://xgboost.readthedocs.io/en/latest/). The package provides support for gradient boosted trees and can leverage distributed GPU compute environments.
###Code
import xgboost as xgb
###Output
_____no_output_____
###Markdown
Getting data into a format for XGBoost is really easy. Just make a `DMatrix` for both training and testing.
###Code
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
###Output
_____no_output_____
###Markdown
Like any good ML package, there are quite a few parameters to set. We're going to start with the softmax objective function. This will let us get a predicted category out of our model. We'll also set other parameters like the maximum depth and number of threads. You can read more about the parameters [here](https://xgboost.readthedocs.io/en/latest/parameter.html). Experiment with them!
###Code
param = {}
param['objective'] = 'multi:softmax'
param['eta'] = 0.1
param['max_depth'] = 8
param['silent'] = 1
param['nthread'] = 4
param['num_class'] = num_categories
param['max_features'] = 'auto'  # note: this is a scikit-learn style parameter, not an XGBoost one, so it has no effect here
param['n_gpus'] = 1
param['tree_method'] = 'gpu_hist'
# param
###Output
_____no_output_____
###Markdown
XGBoost allows us to define a watchlist so that we can keep track of performance as the algorithm trains. We'll configure a simple watchlist that is watching `xg_train` and `xg_test` error rates.
###Code
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 20
###Output
_____no_output_____
###Markdown
Training our First XGBoost Model

Now it's time to train.
###Code
bst = xgb.train(param, xg_train, num_round, watchlist)
###Output
_____no_output_____
###Markdown
Prediction is also easy (and fast).
###Code
pred = bst.predict(xg_test)
###Output
_____no_output_____
###Markdown
We might want to get a sense of how our model is doing by calculating the error rate.
###Code
pred_cdf = cd.from_pandas(pd.DataFrame(pred, columns=['pred']))
pred_cdf.add_column('category_id',test_Y['category_id'])
error_rate = (pred_cdf[pred_cdf['pred'] != pred_cdf['category_id']]['pred'].count()) / test_Y.shape[0]
error_rate
###Output
_____no_output_____
###Markdown
That's not great, but it's not terrible considering we made quite a few seemingly arbitrary decisions in both the feature selection and aggregation phases. Maybe we want to get some more insight into how our model is performing by analyzing the ROC curves for each class, micro average, and macro average. We'll revert back to traditional Python data science tools to do this analysis.

Analyzing the Model's Performance

We'll start by importing some packages we'll need to perform this analysis. For simplicity in an already large notebook, we'll put them in a single cell.
###Code
# sklearn is used to binarize the labels as well as calculate ROC and AUC
from sklearn.metrics import roc_curve, auc,recall_score,precision_score
from sklearn.preprocessing import label_binarize
# scipy is used for interpolating the ROC curves
from scipy import interp
# our old friend matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
# choose whatever style you want
plt.style.use('fivethirtyeight')
# cycle is used just to make different colors for the different ROC curves
from itertools import cycle
###Output
_____no_output_____
###Markdown
A ROC curve analysis can be tricky for multiclass problems. One way to deal with it is to look at the ROC curve for each class. We'll take some steps to format our data so that it plays nicely with input requirements from sklearn (ah 80/20 rule, we meet again). We also will need to rerun our model with a different objective function.

Rerunning the Model with the `softprob` Objective Function

We used the `softmax` objective function above, but what we really want out of the model this time is probabilities that a netflow communication belongs to each of the classes. This is easy enough to do with XGBoost, as we just change the objective function to `softprob`. For simplicity, all of the configuration is in a single cell below rather than spread out. Note the only difference is the objective function change.
###Code
cdf_msk = np.random.rand(len(one_hour_time_bin_cdf)) < 0.7
train_cdf = one_hour_time_bin_cdf[cdf_msk]
test_cdf = one_hour_time_bin_cdf[~cdf_msk]
train_X = train_cdf[['pkts','ip_bytes']]
train_Y = train_cdf[['category_id']]
test_X = test_cdf[['pkts','ip_bytes']]
test_Y = test_cdf[['category_id']]
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
param = {}
param['objective'] = 'multi:softprob'
param['eta'] = 0.1
param['max_depth'] = 8
param['silent'] = 1
param['nthread'] = 4
param['num_class'] = num_categories
param['n_gpus'] = 1
param['tree_method'] = 'gpu_hist'
watchlist = [(xg_train, 'train'), (xg_test, 'test')]
num_round = 20
###Output
_____no_output_____
###Markdown
Train the model.
###Code
bst = xgb.train(param, xg_train, num_round, watchlist)
###Output
_____no_output_____
###Markdown
Okay, so we have our new model. We now take some steps to make sure the data is in a format that makes sklearn happy. First we'll use the `predict` function to compute the probabilities. To extend `roc_curve` to multiclass, we'll also need to binarize the labels. Let's keep our sanity by also making sure the lengths match.
###Code
len(bst.predict(xg_test))
probs = bst.predict(xg_test).reshape(test_Y.shape[0],param['num_class'])
###Output
_____no_output_____
###Markdown
For now, we need to convert the `test_Y` cuDF to an array. The most straightforward way to do that is to go through Pandas. It also lets us show off how nicely we can convert to Pandas, should the need arise.
###Code
test_Y_binarize = label_binarize(test_Y.to_pandas()['category_id'].values, classes=np.arange(param['num_class']))
print("==> length of probs =",len(probs))
print("==> length of test_Y_binarize =", len(test_Y_binarize))
###Output
_____no_output_____
###Markdown
Some more housekeeping. We'll create Python dictionaries to hold FPR ([false positive rate](https://en.wikipedia.org/wiki/False_positive_rate)), TPR ([true positive rate](https://en.wikipedia.org/wiki/Sensitivity_and_specificity)), and AUC ([area under the curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)) values.
###Code
fpr = dict()
tpr = dict()
roc_auc = dict()
###Output
_____no_output_____
###Markdown
For each of our classes, we'll compute FPR, TPR, and AUC. We'll also compute the [micro and macro averages](http://rushdishams.blogspot.com/2011/08/micro-and-macro-average-of-precision.html).
###Code
print("==> number of classes =", num_categories)
# calculate FPR, TPR, and ROC AUC for every class
for i in range(num_categories):
fpr[i], tpr[i], _ = roc_curve(test_Y_binarize[:, i], probs[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# calculate the micro average FPR, TPR, and ROC AUC (we'll calculate the macro average below)
fpr["micro"], tpr["micro"], _ = roc_curve(test_Y_binarize.ravel(), probs.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
###Output
_____no_output_____
###Markdown
Plotting the ROC Curves

Phew! Lots of code below, but it's fairly straightforward and [adapted from an example in the scikit-learn documentation](http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#multiclass-settings). Before we plot though, we'll create a simple category lookup dictionary so we can label the classes with their actual names (not their category IDs).
###Code
labels_pdf = labels_cdf.to_pandas()
category_lookup = labels_pdf[['category','category_id']].drop_duplicates().set_index('category_id').T.to_dict()
# aggregate all of the false positive rates across all classes
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(num_categories)]))
# interpolate all of the ROC curves
mean_tpr = np.zeros_like(all_fpr)
for i in range(param['num_class']):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# average the TPR
mean_tpr /= num_categories
# compute the macro average FPR, TPR, and ROC AUC
fpr['macro'] = all_fpr
tpr['macro'] = mean_tpr
roc_auc['macro'] = auc(fpr['macro'], tpr['macro'])
# plot all of the ROC curves on a single plot (for comparison)
plt.figure(figsize=(9,9))
plt.plot(fpr['micro'], tpr['micro'],
label="micro-average ROC curve (area = {0:0.2f})"
"".format(roc_auc['micro']),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr['macro'], tpr['macro'],
label="macro-average ROC curve (area = {0:0.2f})"
"".format(roc_auc['macro']),
color='navy', linestyle=':', linewidth=4)
num_colors = param['num_class']
cm = plt.get_cmap('gist_rainbow')
colors = cycle([cm(1.*i/num_colors) for i in range(num_colors)])
lw = 2
for i, color in zip(range(param['num_class']), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label="ROC curve for "+category_lookup[i]['category']+" class (area = {1:0.2f})"
"".format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate", fontsize=12)
plt.ylabel("True Positive Rate", fontsize=12)
plt.title("ROC Curves for IoT Device Categories")
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
It's not a *terrible* plot, but it gets a little messy. We can also plot each class as its own subplot. First we make a few variables so we can control the layout.
###Code
total_subplots = num_categories
plot_grid_cols = 3
plot_grid_rows = total_subplots // plot_grid_cols
plot_grid_rows += total_subplots % plot_grid_cols
position_index = range(1, total_subplots+1)
###Output
_____no_output_____
###Markdown
Now we make the grid of plots.
###Code
plt.figure()
fig, axs = plt.subplots(plot_grid_rows, plot_grid_cols, sharex=True, sharey=True, figsize=(15,15))
lw = 2
plt_num = 0
for row in range(plot_grid_rows):
for col in range(plot_grid_cols):
if(plt_num <= 12):
axs[row,col].plot(fpr[plt_num], tpr[plt_num], lw=lw)
axs[row,col].set_title(category_lookup[plt_num]['category']+' Devices ROC Curve', fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc[plt_num]), size=11)
elif(plt_num == 13):
axs[row,col].plot(fpr['micro'], tpr['micro'], lw=lw)
axs[row,col].set_title("Micro Average ROC Curve", fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc['micro']), size=12)
elif(plt_num == 14):
axs[row,col].plot(fpr['macro'], tpr['macro'], lw=lw)
axs[row,col].set_title("Macro Average ROC Curve", fontsize=14)
axs[row,col].text(0.7, 0.1,"AUC = {:.4f}".format(roc_auc['macro']), size=12)
axs[row,col].set_xlabel('False Positive Rate', fontsize=10)
axs[row,col].set_ylabel('True Positive Rate', fontsize=10)
plt_num += 1
plt.xlim([-0.01, 1.0])
plt.ylim([0.0, 1.05])
plt.subplots_adjust(wspace=0.2, hspace=0.4)
plt.show()
###Output
_____no_output_____
###Markdown
Conclusions

As we've shown, it's possible to get fairly decent multiclass classification results for IoT data using only basic features (bytes and packets) when aggregated. This isn't surprising, given that we used expert knowledge to assign category labels. In addition, the majority of the time, IoT devices are in a "steady state" (idle), and are not heavily influenced by human interaction. This lets us take larger samples (e.g., aggregate to longer time bins) while still maintaining decent classification performance. It should also be noted that this is a very clean dataset. The traffic is mainly IoT traffic (e.g., little traditional compute traffic), and there are no intentional abnormal activities injected (e.g., red teaming).

We used Bro data, but it's also possible to use the raw PCAP data as input for classification. The preprocessing steps are more arduous than for flow data though. It'd be a great exercise...

More to Explore: Possible Exercises

(1) It may be useful to investigate other time binnings. Can you build another model that uses data binned to a different granularity (e.g., 5 minutes)? A minimal sketch is shown after the work cell below.
###Code
# your work here
###Output
_____no_output_____
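###Markdown
One possible sketch for (1), reusing the earlier hourly aggregation but with 5-minute bins. It assumes `exploded_cdf` and `math` from the cells above are still in scope, and is a starting point rather than a full solution (you would still rebuild the train/test split and retrain).
###Code
# same pattern as the 1-hour binning, just with 60*5 seconds per bin
exploded_cdf['five_min_time_bin'] = exploded_cdf['ts'].applymap(lambda x: math.floor(x/(60*5))).astype(int)
five_min_time_bin_cdf = (exploded_cdf[['bytes','pkts','ip_bytes','mac','category_id','five_min_time_bin']]
                         .groupby(['mac','category_id','five_min_time_bin'])
                         .agg({'bytes':'sum',
                               'pkts':'sum',
                               'ip_bytes':'sum'})
                        )
five_min_time_bin_cdf.head()
###Output
_____no_output_____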
###Markdown
(2) We used the `sum` of bytes and packets for a device when aggregated to the hour. What about other ways to handle these quantitative features (e.g., average)? Would that improve the classification results?
###Code
# your work here
###Output
_____no_output_____
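###Markdown
A minimal sketch for (2), assuming the same columns as before and that cuDF's groupby supports `mean` as an aggregation: swap the `sum` aggregations for `mean` and retrain on the result.
###Code
mean_one_hour_cdf = (exploded_cdf[['bytes','pkts','ip_bytes','mac','category_id','hour_time_bin']]
                     .groupby(['mac','category_id','hour_time_bin'])
                     .agg({'bytes':'mean',
                           'pkts':'mean',
                           'ip_bytes':'mean'})
                    )
mean_one_hour_cdf.head()
###Output
_____no_output_____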
###Markdown
(3) We selected specific parameters for XGBoost. These could probably use a bit more thought. You can [read more about the parameters](https://xgboost.readthedocs.io/en/latest/parameter.html) and try adjusting them on our previous dataset.
###Code
# a reminder about our parameters
print(param)
# your work here
###Output
_____no_output_____
###Markdown
(4) There are additional features in the netflow data that we didn't use. Some other quantitative fields (e.g., duration) and categorical fields (e.g., protocol, service, ports) may be useful for classification. Build another XGBoost model using some/all of these fields.
###Code
# your work here
###Output
_____no_output_____ |
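###Markdown
One possible starting point for (4), sticking to a quantitative field to keep the sketch simple: add `duration` to the hourly aggregation and feature matrices, then rebuild the DMatrix and retrain as before. Handling the categorical fields (protocol, service, ports) would additionally require encoding them numerically, which is left as the exercise intends.
###Code
# assumes exploded_cdf, param and num_round from earlier cells are still in scope
with_duration_cdf = (exploded_cdf[['duration','pkts','ip_bytes','mac','category_id','hour_time_bin']]
                     .groupby(['mac','category_id','hour_time_bin'])
                     .agg({'duration':'sum',
                           'pkts':'sum',
                           'ip_bytes':'sum'})
                    )
# rebuild train/test exactly as before, but with the extra column
cdf_msk = np.random.rand(len(with_duration_cdf)) < 0.7
train_cdf = with_duration_cdf[cdf_msk]
test_cdf = with_duration_cdf[~cdf_msk]
train_X = train_cdf[['duration','pkts','ip_bytes']]
train_Y = train_cdf[['category_id']]
test_X = test_cdf[['duration','pkts','ip_bytes']]
test_Y = test_cdf[['category_id']]
xg_train = xgb.DMatrix(train_X, label=train_Y)
xg_test = xgb.DMatrix(test_X, label=test_Y)
bst = xgb.train(param, xg_train, num_round, [(xg_train, 'train'), (xg_test, 'test')])
###Output
_____no_output_____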
notebooks/ensemble/ensemble_fc_predictions_v2-with-laugh-with-TEST.ipynb | ###Markdown
Video Sentiment Analysis in the Wild
Ensembling Notebook | FC | CS231n

Modalities Used
- Scene (ResNet - three LSTM heads) [model](https://storage.googleapis.com/cs231n-emotiw/models/scene-classifier-resnet-lstm-x3.h5)
- Pose [model](https://storage.googleapis.com/cs231n-emotiw/models/pose-classifier-64lstm-0.01reg.h5)
- Audio [model](https://storage.googleapis.com/cs231n-emotiw/models/openl3-cnn-lstm-tuned-lr.h5)
- Image Captioning [model](https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer_sentiment-transformer_756.pth) and [model metadata](https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer-16.metadata.bin)
- Laugh [notebook](https://github.com/kevincong95/cs231n-emotiw/blob/master/notebooks/laugh_detection/laugh-vggish.ipynb)

Test Accuracy = **61.8%**

FC Model [model](https://storage.googleapis.com/cs231n-emotiw/models/ensemble-fc-laugh-final-v2.h5)

Copy Pre-Processed Files
###Code
!ls
!nvidia-smi
from google.colab import drive
drive.mount('/content/drive')
# FULL_PATH = 'My Drive/Machine-Learning-Projects/cs231n-project/datasets/emotiw'
FULL_PATH = 'My Drive/cs231n-project/datasets/emotiw'
!cp /content/drive/'$FULL_PATH'/test-final-* .
!cp /content/drive/'My Drive'/cs231n-project/datasets/emotiw/Test_labels.txt .
!ls
# RUN THIS FOR FINAL FILES (zip includes root folder)
!unzip -d test-final-audio test-final-audio.zip
!unzip -d test-final-faces test-final-faces.zip
!unzip -d test-final-frames test-final-frames.zip
!unzip -d test-final-pose test-final-pose.zip
!unzip -d test-final-fer test-final-fer.zip
!ls |head
###Output
drive
sample_data
test-final-audio
test-final-audio.zip
test-final-faces
test-final-faces.zip
test-final-fer
test-final-fer.zip
test-final-frames
test-final-frames.zip
###Markdown
Run Classifiers
###Code
%tensorflow_version 2.x
import tensorflow
print(tensorflow.__version__)
!pwd
import urllib
from getpass import getpass
import os
user = input('User name: ')
password = getpass('Password: ')
password = urllib.parse.quote(password) # your password is converted into url format
cmd_string = 'git clone https://{0}:{1}@github.com/kevincong95/cs231n-emotiw.git'.format(user, password)
os.system(cmd_string)
cmd_string, password = "", "" # removing the password from the variable
!mv test-* cs231n-emotiw
!mv Test* cs231n-emotiw
!pwd
import os
os.chdir('/content/cs231n-emotiw')
!pwd
!pip install pytorch-transformers
# Create the concatenated input layer to feed into FC
from src.classifiers.audio_classifier import AudioClassifier
from src.classifiers.frames_classifier import FramesClassifier
from src.classifiers.pose_classifier import PoseClassifier
from src.classifiers.face_classifier import FaceClassifier
from src.classifiers.image_captioning_classifier import ImageCaptioningClassifier, FineTuningConfig
from src.classifiers.utils import get_num_samples
import numpy as np
def run_classifier(layers_to_extract, audio_folder='train-final-audio', frames_folder='train-final-frames', pose_folder='train-final-pose', face_folder='train-final-fer', image_caption_pkl="train-final-captions.pkl", image_caption_prefix="train_", labels_file="Train_labels.txt"):
audio_classifier = AudioClassifier(audio_folder, model_location='https://storage.googleapis.com/cs231n-emotiw/models/openl3-cnn-lstm-tuned-lr.h5', is_test=False)
frames_classifier = FramesClassifier(frames_folder, model_location='https://storage.googleapis.com/cs231n-emotiw/models/scene-classifier-resnet-lstm-x3.h5', is_test=False)
frames_classifier_vgg = FramesClassifier(frames_folder, location_prefix="vgg", model_location='https://storage.googleapis.com/cs231n-emotiw/models/vgg19-lstm-cp-0003.h5', is_test=False, batch_size=4)
pose_classifier = PoseClassifier(pose_folder, model_location='https://storage.googleapis.com/cs231n-emotiw/models/pose-classifier-64lstm-0.01reg.h5', is_test=False)
face_classifier = FaceClassifier(face_folder, model_location='/content/drive/My Drive/cs231n-project/models/face-classifier-playground/cp-0001.h5', is_test=False)
image_captioning_classifier = ImageCaptioningClassifier(image_caption_pkl, image_caption_prefix, model_metadata_location="https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer-16.metadata.bin", model_location='https://storage.googleapis.com/cs231n-emotiw/models/sentiment-transformer_sentiment-transformer_756.pth', is_test=False)
# classifiers = [audio_classifier, frames_classifier, pose_classifier] # face_classifier]
classifiers = [audio_classifier, frames_classifier, frames_classifier_vgg, pose_classifier, face_classifier, image_captioning_classifier]
# classifiers = [frames_classifier_vgg]
sample_to_true_label = {}
with open(labels_file) as f:
l = 0
for line in f:
if l == 0:
# Skip headers
l += 1
continue
line_arr = line.split(" ")
sample_to_true_label[line_arr[0].strip()] = int(line_arr[1].strip()) - 1 # subtract one to make labels from 0 to 2
l += 1
classifier_outputs = []
classifier_samples = []
classifier_dim_sizes = []
output_dim_size = 0
num_samples = 0
sample_to_row = {}
for i, classifier in enumerate(classifiers):
output, samples = classifier.predict(layers_to_extract[i])
output_dim_size += output.shape[1]
classifier_dim_sizes.append(output.shape[1])
num_samples = len(samples)
classifier_outputs.append(output)
classifier_samples.append(samples)
X_train = np.zeros(shape=(num_samples, output_dim_size))
y_train = []
print(f"Number of samples: {num_samples}")
print(f"Dim shapes: ")
print(classifier_dim_sizes)
for i, sample in enumerate(classifier_samples[0]):
sample_to_row[sample] = i
y_train.append(sample_to_true_label[sample])
last_classifier_index = 0
for c, output in enumerate(classifier_outputs):
samples = classifier_samples[c]
print(len(output))
for i, row in enumerate(output):
sample = samples[i]
X_train[sample_to_row[sample], last_classifier_index:last_classifier_index+classifier_dim_sizes[c]] += row
last_classifier_index += classifier_dim_sizes[c]
return X_train, tf.keras.utils.to_categorical(y_train, num_classes=3)
import tensorflow as tf
# For each classifier, extract the specific desired layer
# (refer to the model summary for the layer names)
layers_to_extract = [
"dense", # Audio
"concatenate_5", # ResNet
"global_average_pooling3d_1", # VGG
"bidirectional_1", # Pose
"dense_27", # FER
"classification_head" # Image Caption
]
prefix = "final"
X_test, y_test = run_classifier(layers_to_extract, audio_folder=f"test-{prefix}-audio", frames_folder=f"test-{prefix}-frames", pose_folder=f"test-{prefix}-pose" , face_folder=f"test-{prefix}-fer" , image_caption_pkl="test-final-captions.pkl", image_caption_prefix="test_", labels_file="Test_labels.txt")
print(X_test.shape)
print(y_test.shape)
!rm -rf ensemble-scene-scene-pose-audio-face-caption-v1-test
!mkdir ensemble-scene-scene-pose-audio-face-caption-v1-test
np.save("ensemble-scene-scene-pose-audio-face-caption-v1-test/X_test.npy", X_test)
np.save("ensemble-scene-scene-pose-audio-face-caption-v1-test/y_test.npy", y_test)
!zip -r ensemble-scene-scene-pose-audio-face-caption-v1-test.zip ensemble-scene-scene-pose-audio-face-caption-v1-test
!cp ensemble-scene-scene-pose-audio-face-caption-v1-test.zip ../drive/'My Drive'/cs231n-project/datasets/emotiw
###Output
adding: ensemble-scene-scene-pose-audio-face-caption-v1-test/ (stored 0%)
adding: ensemble-scene-scene-pose-audio-face-caption-v1-test/X_test.npy (deflated 48%)
adding: ensemble-scene-scene-pose-audio-face-caption-v1-test/y_test.npy (deflated 94%)
###Markdown
Start here to load files written to disk
###Code
from google.colab import drive
drive.mount('/content/drive')
!cp /content/drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw/test-final-laugh-prob.pkl .
!cp /content/drive/'My Drive/Machine-Learning-Projects'/cs231n-project/datasets/emotiw/ensemble-scene-scene-pose-audio-face-caption-v1-test.zip .
!unzip ensemble-scene-scene-pose-audio-face-caption-v1-test.zip
import numpy as np
X_test = np.load("ensemble-scene-scene-pose-audio-face-caption-v1-test/X_test.npy")
y_test = np.load("ensemble-scene-scene-pose-audio-face-caption-v1-test/y_test.npy")
X_test.shape
## Get the laughs
import pickle
test_vid_to_laugh = {}
test_laugh_vec = []
with open('test-final-laugh-prob.pkl', 'rb') as handle:
test_laugh_obj = pickle.load(handle)
i = 0
for vid in test_laugh_obj["vids"]:
test_vid_to_laugh[vid] = test_laugh_obj["actual_preds"][i]
i += 1
for vid in sorted(test_laugh_obj["vids"]):
test_laugh_vec.append(test_vid_to_laugh[vid])
print(len(test_laugh_vec))
# Adding laughter probability as an additional dimension
test_laugh_vec = np.expand_dims(test_laugh_vec, 1)
X_test = np.hstack((X_test, test_laugh_vec))
X_test.shape
import tensorflow as tf
MODEL_PATH = '/content/drive/My Drive/Machine-Learning-Projects/cs231n-project/models/ensemble-scene-scene-pose-audio-face-caption-laugh-v1/cp-0036.h5'
model = tf.keras.models.load_model(MODEL_PATH)
model.summary()
#
# CONFIGURATION
#
# Define any constants for the model here
#
from pathlib import Path
import tensorflow as tf
import matplotlib.pyplot as plt
sizes = [32, 30, 40, 128, 8, 16, 1]
# UNCOMMENT IF EXCLUDING FER, AND RESNET [best] ************
mask = []
for x in range(sum(sizes)):
if x >= 62 and x < 102:
mask.append(False)
elif x < 230:
mask.append(True)
elif x >= 238:
mask.append(True)
else:
mask.append(False)
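# With sizes = [32, 30, 40, 128, 8, 16, 1] the concatenated feature blocks start at offsets
# 0, 32, 62, 102, 230, 238 and 254. The mask above therefore keeps indices [0, 62), [102, 230)
# and [238, 255) and drops [62, 102) and [230, 238), i.e. the third (40-dim) and fifth (8-dim)
# blocks, which appear to correspond to the VGG scene head and the FER face head given the
# classifier order used in run_classifier earlier in this notebook.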
import pickle
history = model.evaluate(
x=X_test[:, mask],
y=y_test
)
import pickle
predictions = model.predict(
x=X_test[:, mask]
)
len(y_test)
len(predictions)
y_true_final = np.argmax(y_test, axis=1)
y_pred_final = np.argmax(predictions, axis=1)
from sklearn import metrics
import pandas as pd
import seaborn as sn
cm=metrics.confusion_matrix(y_true_final,y_pred_final)
import matplotlib.pyplot as plt
classes=['Pos' , 'Neu' , 'Neg']
con_mat = tf.math.confusion_matrix(labels=y_true_final, predictions=y_pred_final).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm,
index = classes,
columns = classes)
figure = plt.figure(figsize=(11, 9))
plt.title("FC Voting Confusion Matrix with Scene & Audio")
sn.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
accuracy = (y_pred_final == y_true_final).mean()
print(f"Accuracy: {accuracy}")
y_pred_final
###Output
_____no_output_____ |
DCTW train [MMI].ipynb | ###Markdown
Load data
###Code
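# Assumed setup for this notebook: menpo for I/O plus TF 1.x with slim for the network below.
# `cached_data_path`, `losses` (correlation cost), `dtw`, and `score` are project-specific
# and are assumed to be defined or importable elsewhere.
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
import menpo.io as mio
from pathlib import Path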
data, gnd = mio.import_pickle(cached_data_path)
###Output
_____no_output_____
###Markdown
Network
###Code
source_images = tf.placeholder(tf.float32, shape=(None, 40, 40, 1))
target_images = tf.placeholder(tf.float32, shape=(None, 40, 40, 1))
def network(images):
with slim.arg_scope([slim.conv2d, slim.fully_connected], normalizer_fn=slim.batch_norm, outputs_collections='output'):
net = slim.conv2d(images, 32, 3) # 40x40
net = slim.max_pool2d(net, 2) # 20x20
net = slim.conv2d(net, 32, 3)
net = slim.max_pool2d(net, 2) # 10x10
net = slim.flatten(net)
net = slim.fully_connected(net, 10, activation_fn=None)
return net
with tf.variable_scope('net', reuse=False):
source_proj = network(source_images)
with tf.variable_scope('net', reuse=True):
target_proj = network(target_images)
###Output
_____no_output_____
###Markdown
Define losses
###Code
cost = losses.correlation_cost(source_proj, target_proj, 0, 0)
opt = tf.train.AdamOptimizer(.0005)
train_op = opt.minimize(cost)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
from menpo.visualize import print_dynamic
def init_path(source, target):
Vs = [source, target]
P = np.zeros((max([x.shape[0] for x in Vs]), len(Vs)), int)
for i in range(P.shape[1]):
P[:, i] = np.linspace(0, Vs[i].shape[0] - 1, num=P.shape[0]).round()
return P
current_paths = {}
current_scores = {}
###Output
_____no_output_____
###Markdown
Training
###Code
# These are the samples used in the paper.
subs = [0, 1, 5, 6, 12, 16, 18, 19, 24, 25]
num_videos = len(subs)
num_epochs = 5
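# Alternating optimization: for each pair of sequences we (1) take a gradient step on the
# correlation cost using the frames aligned by the current warping path, then (2) project
# both sequences through the updated network and re-solve the alignment with DTW. The new
# path and its score against the ground-truth temporal phases are cached per pair.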
for epoch in range(num_epochs):
for idx, i in enumerate(subs):
for j in subs:
if j < i: continue
if (i, j) not in current_paths:
current_paths[(i, j)] = init_path(data[i], data[j])
path = current_paths[(i, j)].T
loss, _, fx, gx = sess.run((cost, train_op, source_proj, target_proj), feed_dict={
source_images: data[i].transpose(0, 2, 3, 1)[path[0]],
target_images: data[j].transpose(0, 2, 3, 1)[path[1]]
})
# Gets the unique observations.
fx = fx[np.unique(path[0], return_index=True)[1]].astype(np.float)
gx = gx[np.unique(path[1], return_index=True)[1]].astype(np.float)
# Solves for the warping.
path = dtw(fx, gx)
valuation = score(gnd[i][path[:, 0]], gnd[j][path[:, 1]])
current_scores[(i, j)] = valuation
current_paths[(i, j)] = path
mean_score = np.mean(list(current_scores.values()))
print_dynamic("Loss: {:3f}, Score: {:.2f} {}/{} -- Epoch {}/{}".format(
loss[0], mean_score, idx, num_videos, epoch, num_epochs))
# good ids: 0, 8, 18
video_id = 29
f = sess.run(tf.get_collection('output')[1], feed_dict={
source_images: data[video_id].transpose(0, 2, 3, 1),
})
%matplotlib inline
from menpo.image import Image
def merge_images(*images, group=None):
if len(images) == 1:
images = images[0]
image_widths = np.cumsum([0] + [im.shape[1] for im in images])
merged_im = Image(np.concatenate([im.pixels for im in images], 2))
return merged_im
list(Path('/vol/atlas/homes/gt108/db/MMI_smile/').glob('*frames'))[29]
frames = mio.import_images(list(Path('/vol/atlas/homes/gt108/db/MMI_smile/').glob('*frames'))[video_id])
frames = sorted(frames, key=lambda a: int(a.path.stem.split('_')[1]))
images = [frames[i].crop_to_landmarks().resize((100, 100)) for i in np.linspace(0, len(data[video_id])-1, 7).round().astype(int)]
# images = [Image(data[video_id][i, 0]) for i in np.linspace(0, len(data[video_id])-1, 5).round()]
# features = [Image(f[i, ..., 23]).resize((40, 40)) for i in np.linspace(0, len(data[video_id])-1, 7).round()]
features = [Image(f[i].mean(-1)).resize((40, 40)) for i in np.linspace(0, len(data[video_id])-1, 7).round()]
merge_images(*images).view()
merge_images(*features).view(new_figure=True)
import matplotlib.pyplot as plt
def tt(x):
x = x.copy()
x[x==3] = 1
return x
plt.figure(figsize=(8, 3))
plt.plot(tt(gnd[video_id]), linewidth=3)
plt.ylabel('Temporal phase')
plt.ylim([0, 2.2])
plt.grid()
plt.yticks(np.arange(3), np.array(['neutral', 'offset\nonset', 'apex']))
plt.xlabel('Frames')
###Output
_____no_output_____ |
Section 19/Quora Project/Teclov3_W2V.ipynb | ###Markdown
Featurizing text data with tfidf weighted word-vectors
###Code
import pandas as pd
import matplotlib.pyplot as plt
import re
import time
import warnings
import numpy as np
from nltk.corpus import stopwords
from sklearn.preprocessing import normalize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
warnings.filterwarnings("ignore")
import sys
import os
import pandas as pd
import numpy as np
from tqdm import tqdm
# exctract word2vec vectors
# https://github.com/explosion/spaCy/issues/1721
# http://landinghub.visualstudio.com/visual-cpp-build-tools
import spacy
# avoid decoding problems
df = pd.read_csv("train.csv")
# encode questions to unicode
# https://stackoverflow.com/a/6812069
# ----------------- python 2 ---------------------
# df['question1'] = df['question1'].apply(lambda x: unicode(str(x),"utf-8"))
# df['question2'] = df['question2'].apply(lambda x: unicode(str(x),"utf-8"))
# ----------------- python 3 ---------------------
df['question1'] = df['question1'].apply(lambda x: str(x))
df['question2'] = df['question2'].apply(lambda x: str(x))
df.head()
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
# merge texts
questions = list(df['question1']) + list(df['question2'])
tfidf = TfidfVectorizer(lowercase=False, )
tfidf.fit_transform(questions)
# dict key:word and value:tf-idf score
word2tfidf = dict(zip(tfidf.get_feature_names(), tfidf.idf_))
###Output
_____no_output_____
###Markdown
- After we find TF-IDF scores, we convert each question to a weighted average of word2vec vectors using these scores.
- Here we use a pre-trained GloVe model which comes free with spaCy: https://spacy.io/usage/vectors-similarity
- It is trained on Wikipedia and therefore it is stronger in terms of word semantics.
###Code
# en_vectors_web_lg includes over 1 million unique vectors; the small English model loaded
# below ships without static word vectors, so token.vector falls back to context-sensitive tensors.
nlp = spacy.load('en_core_web_sm')
vecs1 = []
# https://github.com/noamraph/tqdm
# tqdm is used to print the progress bar
for qu1 in tqdm(list(df['question1'])):
doc1 = nlp(qu1)
# 384 is the number of dimensions of vectors
mean_vec1 = np.zeros([len(doc1), 384])
for word1 in doc1:
# word2vec
vec1 = word1.vector
# fetch df score
try:
idf = word2tfidf[str(word1)]
except:
idf = 0
# compute final vec
mean_vec1 += vec1 * idf
mean_vec1 = mean_vec1.mean(axis=0)
vecs1.append(mean_vec1)
df['q1_feats_m'] = list(vecs1)
vecs2 = []
for qu2 in tqdm(list(df['question2'])):
doc2 = nlp(qu2)
mean_vec2 = np.zeros([len(doc2), 384])
for word2 in doc2:
# word2vec
vec2 = word2.vector
# fetch df score
try:
idf = word2tfidf[str(word2)]
except:
#print word
idf = 0
# compute final vec
mean_vec2 += vec2 * idf
mean_vec2 = mean_vec2.mean(axis=0)
vecs2.append(mean_vec2)
df['q2_feats_m'] = list(vecs2)
#prepro_features_train.csv (Simple Preprocessing Features)
#nlp_features_train.csv (NLP Features)
if os.path.isfile('nlp_features_train.csv'):
dfnlp = pd.read_csv("nlp_features_train.csv",encoding='latin-1')
else:
print("download nlp_features_train.csv from drive or run previous notebook")
if os.path.isfile('df_fe_without_preprocessing_train.csv'):
dfppro = pd.read_csv("df_fe_without_preprocessing_train.csv",encoding='latin-1')
else:
print("download df_fe_without_preprocessing_train.csv from drive or run previous notebook")
df1 = dfnlp.drop(['qid1','qid2','question1','question2'],axis=1)
df2 = dfppro.drop(['qid1','qid2','question1','question2','is_duplicate'],axis=1)
df3 = df.drop(['qid1','qid2','question1','question2','is_duplicate'],axis=1)
df3_q1 = pd.DataFrame(df3.q1_feats_m.values.tolist(), index= df3.index)
df3_q2 = pd.DataFrame(df3.q2_feats_m.values.tolist(), index= df3.index)
# dataframe of nlp features
df1.head()
# data before preprocessing
df2.head()
# Questions 1 tfidf weighted word2vec
df3_q1.head()
# Questions 2 tfidf weighted word2vec
df3_q2.head()
print("Number of features in nlp dataframe :", df1.shape[1])
print("Number of features in preprocessed dataframe :", df2.shape[1])
print("Number of features in question1 w2v dataframe :", df3_q1.shape[1])
print("Number of features in question2 w2v dataframe :", df3_q2.shape[1])
print("Number of features in final dataframe :", df1.shape[1]+df2.shape[1]+df3_q1.shape[1]+df3_q2.shape[1])
# storing the final features to csv file
if not os.path.isfile('final_features.csv'):
df3_q1['id']=df1['id']
df3_q2['id']=df1['id']
df1 = df1.merge(df2, on='id',how='left')
df2 = df3_q1.merge(df3_q2, on='id',how='left')
result = df1.merge(df2, on='id',how='left')
result.to_csv('final_features.csv')
###Output
_____no_output_____ |
paper_retrieval/evaluation_notebooks/bm25_evaluation.ipynb | ###Markdown
BM25 Evaluation

This notebook evaluates the BM25 information retrieval method for retrieving relevant papers given an input query. The BM25 model worked best for specific queries but had problems dealing with generic ones. The best model is found using a grid search and uses ontology query expansion.

Table Of Contents
* [Load corpus using different preprocessing pipelines](#Load-corpus-using-different-preprocessing-pipelines)
* [Load keywords to use as test data](#Load-keywords-to-use-as-test-data)
* [Grid search for BM25 k1 parameter](#Grid-search-for-BM25-k1-parameter)
* [Grid search for BM25 b parameter](#Grid-search-for-BM25-b-parameter)
* [Test bm25 with best parameters on n-grams](#Test-bm25-with-best-parameters-on-n-grams)
* [Visualize pseudo relevance feedback](#Visualize-pseudo-relevance-feedback)
* [Evaluate pseudo relevance feedback](#Evaluate-pseudo-relevance-feedback)
* [Ontology Query Expansion](#Ontology-Query-Expansion)
###Code
# Imports
import json
import sys
import os
import logging
import pickle
import random
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
pd.set_option('max_colwidth', 1000)
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
from wordcloud import WordCloud
from tqdm.notebook import tqdm
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
tqdm.pandas()
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from evaluation import *
from preprocessing import Corpus, BasicPreprocessing, BigramPreprocessor, SpacyPreprocessor, StopWordPreprocessor
from retrieval_algorithms import BM25RetrievalAlgorithm
from retrieval_algorithms.prf_wrapper import PRFWrapper
from retrieval_algorithms.ontology_expansion_wrapper import OntologyExpansionWrapper
###Output
_____no_output_____
###Markdown
Load corpus using different preprocessing pipelines
###Code
base_file = "../../data/kit_expert_2019_all_papers.csv"
# Use remove stopwords
p = [BasicPreprocessing(), StopWordPreprocessor()]
papers_basic = Corpus(base_file, p)
# Remove stopwords and apply lemmatization to all words
p = [BasicPreprocessing(), StopWordPreprocessor(), SpacyPreprocessor(lemmatization="all")]
papers_basic_lemmatization_all = Corpus(base_file, p, load_from_cache=True, n_jobs=16)
# Remove stopwords and apply lemmatization only to nouns
p = [BasicPreprocessing(), StopWordPreprocessor(), SpacyPreprocessor(lemmatization="nouns")]
papers_basic_lemmatization_nouns = Corpus(base_file, p, load_from_cache=True, n_jobs=16)
###Output
INFO:preprocessing.pipeline:Start preprocessing pipeline "basic_NoStopWords" for file ../../data/kit_expert_2019_all_papers.csv.
INFO:preprocessing.pipeline:Loaded cached preprocessed corpus from ../../data/kit_expert_2019_all_papers_basic_NoStopWords
INFO:preprocessing.pipeline:Start preprocessing pipeline "basic_NoStopWords_spacy_lemmatization_all" for file ../../data/kit_expert_2019_all_papers.csv.
INFO:preprocessing.pipeline:Loaded cached preprocessed corpus from ../../data/kit_expert_2019_all_papers_basic_NoStopWords_spacy_lemmatization_all
INFO:preprocessing.pipeline:Start preprocessing pipeline "basic_NoStopWords_spacy_lemmatization_nouns" for file ../../data/kit_expert_2019_all_papers.csv.
INFO:preprocessing.pipeline:Loaded cached preprocessed corpus from ../../data/kit_expert_2019_all_papers_basic_NoStopWords_spacy_lemmatization_nouns
###Markdown
Load keywords to use as test data
###Code
with open("../../data/kit_expert_2019_all_keywords.json", "r") as file:
keywords = json.load(file)
# Split test queries into general and specific queries
general_keywords = [k for k in keywords if k["level"]<=1]
specific_keywords = [k for k in keywords if k["level"]>=2 and len(k["paper_ids"])>=10]
# Split dataset into train and test
general_keywords_val = ("general keywords validation", general_keywords[0:int(len(general_keywords)*0.8)])
specific_keywords_val = ("specific keywords validation", specific_keywords[0:int(len(specific_keywords)*0.8)])
general_keywords_test = ("general keywords test", general_keywords[int(len(general_keywords)*0.8):])
specific_keywords_test = ("specific keywords test", specific_keywords[int(len(specific_keywords)*0.8):])
###Output
_____no_output_____
###Markdown
Grid search for BM25 k1 parameter The k1 parameter of the BM25 model defines how fast a query term is saturated. If a query term is saturated, further occurrences of the term do not result in an improved score. The b parameter controls the amount of length normalization applied. Length normalization means that the score is divided by the number of words in the document. First the best k1 parameter of the BM25 model will be searched and then the best b parameter. It would be best to search for both at the same time, but this requires many more evaluations.
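The effect of the two parameters is easiest to see in the scoring formula itself. The snippet below is a generic, simplified sketch of the Okapi BM25 contribution of a single query term (it is not the implementation inside `BM25RetrievalAlgorithm`); `tf` is the term frequency in the document, `df` the document frequency, `N` the number of documents, and `dl`/`avgdl` the document length and average document length.

```python
import math

def bm25_term_score(tf, df, N, dl, avgdl, k1=1.2, b=0.75):
    """Simplified BM25 score contribution of one query term for one document."""
    idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
    length_norm = 1 - b + b * (dl / avgdl)   # b scales the document-length penalty
    # k1 controls saturation: as tf grows, the fraction approaches k1 + 1
    return idf * (tf * (k1 + 1)) / (tf + k1 * length_norm)

# With a small k1 the term saturates quickly: doubling tf barely changes the score
print(bm25_term_score(tf=5, df=100, N=10_000, dl=120, avgdl=150, k1=0.4, b=0.8))
print(bm25_term_score(tf=10, df=100, N=10_000, dl=120, avgdl=150, k1=0.4, b=0.8))
```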
###Code
# Define grid
k1_grid = np.arange(0.1,2.1,0.1)
search_k1_bm25_models = [(f"BM25 k1={k1:.2f}", BM25RetrievalAlgorithm(k1=k1, b=0.5), papers_basic) for k1 in k1_grid]
# Train model
search_k1_bm25_results = train_evaluate_models(search_k1_bm25_models, [general_keywords_val, specific_keywords_val], n_jobs=10)#len(search_k1_bm25_models))
# Save results to csv
search_k1_bm25_results.to_csv("../../data/results/search_k1_bm25_results.csv")
# Plot results
plot_data = search_k1_bm25_results["specific keywords validation"]["mAP"]["avg"]
err_data = search_k1_bm25_results["specific keywords validation"]["mAP"]["err"]
plot_data.index = k1_grid
ax = plot_data.plot(label="specific queries", figsize=(12,5), style="-bo", legend=False, xticks=[0.1]+list(np.arange(0,2.2,0.2)), xlim=(0.1,2.), ylim=(0.44,0.48))
ax.set_ylabel("mAP");
ax.legend(loc="upper right")
plt.savefig("images/bm25_k1_search.pdf", transparent=True, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Results:- k1 of 0.4 has the best performance Grid search for BM25 b parameter
###Code
# Define grid
b_grid = np.arange(0.0,1.1,0.1)
search_b_bm25_models = [(f"BM25 b={b:.2f}", BM25RetrievalAlgorithm(b=b, k1=0.4), papers_basic) for b in b_grid]
# Train models
search_b_bm25_results = train_evaluate_models(search_b_bm25_models, [general_keywords_val, specific_keywords_val], n_jobs=len(search_b_bm25_models))
# Save results to csv
search_b_bm25_results.to_csv("../../data/results/search_b_bm25_results.csv")
# Plot results
plot_data = search_b_bm25_results["specific keywords validation"]["mAP"]["avg"]
err_data = search_b_bm25_results["specific keywords validation"]["mAP"]["err"]
plot_data.index = b_grid
ax = plot_data.plot(label="specific queries", figsize=(12,5), style="-bo", legend=False, xticks=np.arange(0,1.1,0.1), xlim=(0,1.0), ylim=(0.44,0.48))
ax.set_ylabel("mAP");
ax.legend(loc="center right")
plt.savefig("images/bm25_b_search.pdf", transparent=True, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Results:- b parameter does not have a great effect on model performance- best value at 0.8 Test bm25 with best parameters on n-grams Until now only unigrams have been considered which mean the order of the words in the document is irrelevant.When using higher order n-grams this order is taken into account, which can improve performance.
###Code
ngram_bm25_models = [
("bm25 1-gram", BM25RetrievalAlgorithm(max_ngram=1, k1=0.4, b=0.8), papers_basic_lemmatization_nouns),
("bm25 2-gram", BM25RetrievalAlgorithm(max_ngram=2, k1=0.4, b=0.8), papers_basic_lemmatization_nouns),
("bm25 3-gram", BM25RetrievalAlgorithm(max_ngram=3, k1=0.4, b=0.8), papers_basic_lemmatization_nouns),
("bm25 4-gram", BM25RetrievalAlgorithm(max_ngram=4, k1=0.4, b=0.8), papers_basic_lemmatization_nouns),
]
ngram_bm25_results = train_evaluate_models(ngram_bm25_models, [general_keywords_val, specific_keywords_val], n_jobs=4)
ngram_bm25_results.to_csv("../../data/results/ngram_bm25_results.csv")
ngram_bm25_results
###Output
_____no_output_____
###Markdown
Results:- bigrams improve result of bm25- bm25 bigram model achieves best score on specific keywords- general keyword score still very low Visualize pseudo relevance feedback Pseudo relevance feedback first queries the relevant documents using another retrieval methods. Then it extracts additional query terms, creates an expanded query and does a second retrieval of relevant documents with this expanded query. The chosen expansion queries will be visualized in the following code:
###Code
best_bm25_model = BM25RetrievalAlgorithm(max_ngram=2, k1=0.4, b=0.8)
best_bm25_model.prepare(papers_basic_lemmatization_nouns)
prf = PRFWrapper(best_bm25_model, 150, 30, 0.5, 1)
prf.prepare(papers_basic_lemmatization_nouns)
prf.num_expansion_terms=30
# prf.num_relevant_docs=150
def grey_color_func(word, font_size, position, orientation, random_state=None,
**kwargs):
return "hsl(0, 0%%, %d%%)" % ((1-(font_size/130))*40+5)
wordcloud = WordCloud(
width=1300,
height=400,
background_color="white",
color_func=grey_color_func,
max_font_size=120,
min_font_size=20,
random_state=3,
margin=5
).fit_words(prf.get_expansion_terms("machine learning"))
plt.figure(figsize = (10,9))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off");
wordcloud.to_file("images/bm25_prf_machinelearning_wordcloud.png");
wordcloud = WordCloud(
width=1300,
height=400,
background_color="white",
color_func=grey_color_func,
max_font_size=120,
min_font_size=20,
random_state=2,
margin=5
# colormap="Paired"
).fit_words(prf.get_expansion_terms("neural network"))
plt.figure(figsize = (10,9))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off");
wordcloud.to_file("images/bm25_prf_neuralnetwork_wordcloud.png");
wordcloud = WordCloud(
width=1300,
height=400,
background_color="white",
color_func=grey_color_func,
max_font_size=120,
min_font_size=20
# colormap="Paired"
).fit_words(prf.get_expansion_terms("generative adversarial network"))
plt.figure(figsize = (10,9))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off");
wordcloud.to_file("images/bm25_prf_generativeadversarialnetwork_wordcloud.png");
###Output
_____no_output_____
###Markdown
Results:- For very generic terms the expansion terms contain some good terms but also wrong ones- For very specific terms the expansion terms are also too generic- For terms in between, the expansion terms provide good additional information Evaluate pseudo relevance feedback
###Code
prf_grid = [(150,200,np.round(i,3)) for i in np.linspace(0,1,21)]
search_prf_models = [(f"prf nrd={nrd:.2f} net={net:.2f} ew={ew:.2f}", PRFWrapper(best_bm25_model, nrd, net, ew), papers_basic_lemmatization_nouns) for nrd, net, ew in prf_grid]
search_prf_results = train_evaluate_models(search_prf_models, [general_keywords_val, specific_keywords_val], n_jobs=12)
search_prf_results.to_csv("../../data/results/bm25_search_prf_results.csv")
search_prf_results = pd.read_csv("../../data/results/bm25_search_prf_results.csv", index_col=0, header=[0,1,2])
plot_data = search_prf_results.xs('mAP', level=1, axis=1).xs('avg', level=1, axis=1)
err_data = search_prf_results.xs('mAP', level=1, axis=1).xs('err', level=1, axis=1)
plot_data.index = np.linspace(0,1,21)
ax = plot_data.iloc[:,1].plot(label="specific queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1), yticks=np.arange(0,1,0.1), ylim=(0.0,0.6))
ax = plot_data.iloc[:,0].plot(label="general queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1), yticks=np.arange(0,1,0.1), ylim=(0.0,0.6))
ax.set_ylabel("mAP");
ax.set_xlabel("weight λ")
# plt.fill_between(plot_data.index, plot_data.iloc[:,1].values-err_data.iloc[:,1].values, plot_data.iloc[:,1].values+err_data.iloc[:,1].values,
# alpha=0.4, edgecolor=sns.color_palette("Blues")[3], facecolor=sns.color_palette("Blues")[1], linewidth=1)
# plt.fill_between(plot_data.index, plot_data.iloc[:,0].values-err_data.iloc[:,0].values, plot_data.iloc[:,0].values+err_data.iloc[:,0].values,
# alpha=0.4, edgecolor=sns.color_palette("Oranges")[3], facecolor=sns.color_palette("Oranges")[1], linewidth=1)
plt.savefig("images/bm25_prf.pdf", transparent=True, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Ontology Query Expansion
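The general idea of ontology query expansion is to enrich the query with terms that the keyword hierarchy links to it, while giving those terms a lower weight than the original query terms. The sketch below only illustrates that idea; it is not the logic of `OntologyExpansionWrapper`, and the structure of the hierarchy entries is assumed, not taken from `keyword_hierarchy.json`.

```python
def expand_query_with_ontology(query, hierarchy, expansion_weight=0.5):
    """Sketch: return (term, weight) pairs for the query plus related ontology terms."""
    entry = hierarchy.get(query, {})
    related = entry.get("children", []) + entry.get("parents", [])
    # The original query keeps full weight; ontology neighbours are down-weighted
    return [(query, 1.0)] + [(term, expansion_weight) for term in related]

# Hypothetical hierarchy entry, for illustration only
toy_hierarchy = {"machine learning": {"children": ["deep learning"], "parents": ["computer science"]}}
print(expand_query_with_ontology("machine learning", toy_hierarchy))
```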
###Code
with open("../../data/keyword_hierarchy.json", 'r') as file:
keyword_hierarchy = json.load(file)
oqe_grid = [np.round(i,3) for i in np.linspace(0,1,21)]
search_oqe_models = [(f"ontology expansion wrapper w={ew}", OntologyExpansionWrapper(best_bm25_model, keyword_hierarchy, True, ew), papers_basic_lemmatization_nouns) for ew in oqe_grid]
search_oqe_results = train_evaluate_models(search_oqe_models, [general_keywords_val, specific_keywords_val], n_jobs=5)
search_oqe_results.to_csv("../../data/results/bm25_search_oqe_results.csv")
search_oqe_results = pd.read_csv("../../data/results/bm25_search_oqe_results.csv", index_col=0, header=[0,1,2])
plot_data = search_oqe_results.xs('mAP', level=1, axis=1).xs('avg', level=1, axis=1)
err_data = search_oqe_results.xs('mAP', level=1, axis=1).xs('err', level=1, axis=1)
plot_data.index = np.linspace(0,1,21)
ax = plot_data.iloc[:,1].plot(label="specific queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1), ylim=(0.0,0.6))
ax = plot_data.iloc[:,0].plot(label="general queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1), ylim=(0.0,0.6))
ax.set_ylabel("mAP");
ax.set_xlabel("weight λ")
ax.legend(loc="center right")
# plt.fill_between(plot_data.index, plot_data.iloc[:,1].values-err_data.iloc[:,1].values, plot_data.iloc[:,1].values+err_data.iloc[:,1].values,
# alpha=0.4, edgecolor=sns.color_palette("Blues")[3], facecolor=sns.color_palette("Blues")[1], linewidth=1)
# plt.fill_between(plot_data.index, plot_data.iloc[:,0].values-err_data.iloc[:,0].values, plot_data.iloc[:,0].values+err_data.iloc[:,0].values,
# alpha=0.4, edgecolor=sns.color_palette("Oranges")[3], facecolor=sns.color_palette("Oranges")[1], linewidth=1)
plt.savefig("images/bm25_oqe.pdf", transparent=True, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Result: The ontology query expansion with BM25 achieved the best score. The best model will be saved to disk and is used for the expert recommendation.
###Code
bm25_oqe_model = OntologyExpansionWrapper(best_bm25_model, keyword_hierarchy, True, 0.9)
bm25_oqe_model.prepare(papers_basic)
file_path = "../../data/models/tfidf/bm25_oqe.model"
with open(file_path, "wb") as file:
pickle.dump(bm25_oqe_model, file)
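# Sanity check (optional): the pickled model can be restored later for the expert
# recommendation step. Only the pickle round-trip is shown here; the query interface
# of the loaded object is whatever the wrapper classes above expose.
with open(file_path, "rb") as file:
    loaded_model = pickle.load(file)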
###Output
_____no_output_____ |
module4-roc-auc/roc_auc.ipynb | ###Markdown
_Lambda School Data Science, Unit 2 — Predictive Modeling_ ROC AUC Objectives - Understand why accuracy is a misleading metric when classes are imbalanced- Use classification metric: ROC AUC- Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds- Use the class_weight parameter in scikit-learn Libraries category_encoders- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` [mlxtend](http://rasbt.github.io/mlxtend/) (to plot decision regions)- Local Anaconda: `conda install -c conda-forge mlxtend`- Google Colab: Already installed [tqdm](https://tqdm.github.io/) (for progress bars)- Local Anaconda: `conda install -c conda-forge tqdm`- Google Colab: Already installed Downgrade Matplotlib? Need version != 3.1.1Because of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.This _isn't_ required for your homework, but is required to run this notebook. `pip install matplotlib==3.1.0`
###Code
# !pip install category_encoders matplotlib==3.1.0
###Output
_____no_output_____
###Markdown
Lending Club 🏦This lecture uses Lending Club data, historical and current. Predict if peer-to-peer loans are charged off or fully paid. Decide which loans to invest in. Background[According to Wikipedia,](https://en.wikipedia.org/wiki/Lending_Club)> Lending Club is the world's largest peer-to-peer lending platform. Lending Club enables borrowers to create unsecured personal loans between \\$1,000 and \\$40,000. The standard loan period is three years. **Investors can search and browse the loan listings on Lending Club website and select loans that they want to invest in based on the information supplied about the borrower, amount of loan, loan grade, and loan purpose.** Investors make money from interest. Lending Club makes money by charging borrowers an origination fee and investors a service fee.Lending Club's article about [Benefits of diversification](https://www.lendingclub.com/investing/investor-education/benefits-of-diversification) explains,> **With the investment minimum of \\$1,000, you can get up to 40 Notes at \$25 each.**You can read more good context in [Data-Driven Investment Strategies for Peer-to-Peer Lending: A Case Study for Teaching Data Science](https://www.liebertpub.com/doi/full/10.1089/big.2018.0092):> Current refers to a loan that is still being reimbursed in a timely manner. Late corresponds to a loan on which a payment is between 16 and 120 days overdue. If the payment is delayed by more than 121 days, the loan is considered to be in Default. If LendingClub has decided that the loan will not be paid off, then it is given the status of Charged-Off.> These dynamics imply that 5 months after the term of each loan has ended, every loan ends in one of two LendingClub states—fully paid or charged-off. We call these two statuses fully paid and defaulted, respectively, and we refer to a loan that has reached one of these statuses as expired. **One way to simplify the problem is to consider only loans that have expired at the time of analysis.**> A significant portion (13.5%) of loans ended in Default status; depending on how much of the loan was paid back, these loansmight have resulted in a significant loss to investors who had invested in them. The remainder was Fully Paid—the borrower fully reimbursed the loan’s outstanding balance with interest, and the investor earned a positive return on his or her investment. Therefore, to avoid unsuccessful investments, **our goal is to estimate which loans are more likely to default and which will yield low returns.**
###Code
import pandas as pd
pd.options.display.max_columns = 200
pd.options.display.max_rows = 200
LOCAL = '../data/lendingclub/'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Tree-Ensembles/master/data/lendingclub/'
source = WEB
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history = pd.read_csv(source + 'lending-club-subset.csv')
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Current loans available for manual investing, June 17, 2019
# Source: https://www.lendingclub.com/browse/browse.action
current = pd.read_csv(source + 'primaryMarketNotes_browseNotes_1-RETAIL.csv')
# Print the number of observations
print(f'{len(history)} historical loans')
print(f'{len(current)} current loans')
# Calculate percent of each loan repaid
history['percent_paid'] = history['total_pymnt'] / history['funded_amnt']
# See percent paid for charged off vs fully paid loans
history.groupby('loan_status')['percent_paid'].describe()
###Output
_____no_output_____
###Markdown
Begin with baselines: expected value of random decisions
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import percentileofscore
import seaborn as sns
from tqdm import tnrange
def simulate(n=10000, grades=['A','B','C','D'],
start_date='2007-07-01',
end_date='2019-03-01'):
"""
What if you picked 50 random loans for $30 investments?
How much would you have been paid back?
Repeat the simulation many times, and plot the distribution
of probable outcomes.
This doesn't consider fees or "time value of money."
"""
N = 1500
condition = ((history['grade'].isin(grades)) &
(history['issue_d'] >= start_date) &
(history['issue_d'] <= end_date))
possible = history[condition]
simulations = []
for _ in tnrange(n):
picks = possible.sample(50).copy()
picks['paid'] = 30 * picks['percent_paid']
paid = picks['paid'].sum()
simulations.append(paid)
simulations = pd.Series(simulations)
sns.distplot(simulations)
plt.axvline(x=N)
percent = percentileofscore(simulations, 1500)
plt.title(f'{percent}% of simulations did not profit. {start_date}-{end_date}, {grades}')
plt.xlabel('Amount of money paid back')
plt.ylabel('% of outcomes')
simulate()
simulate(grades=['A'])
simulate(grades=['D'])
###Output
_____no_output_____
###Markdown
Wrangle data- Engineer date-based features- Remove features to avoid leakage- Do 3-way split, train/validate/test
###Code
history['earliest_cr_line'].sample(10)
# Engineer date-based features
# Transform earliest_cr_line to an integer:
# How many days the earliest credit line was open, before the loan was issued.
# For current loans available for manual investing, assume the loan will be issued today.
history['earliest_cr_line'] = pd.to_datetime(history['earliest_cr_line'], infer_datetime_format=True)
history['earliest_cr_line'] = history['issue_d'] - history['earliest_cr_line']
history['earliest_cr_line'] = history['earliest_cr_line'].dt.days
current['earliest_cr_line'] = pd.to_datetime(current['earliest_cr_line'], infer_datetime_format=True)
current['earliest_cr_line'] = pd.Timestamp.today() - current['earliest_cr_line']
current['earliest_cr_line'] = current['earliest_cr_line'].dt.days
# Transform earliest_cr_line for the secondary applicant
history['sec_app_earliest_cr_line'] = pd.to_datetime(history['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
history['sec_app_earliest_cr_line'] = history['issue_d'] - history['sec_app_earliest_cr_line']
history['sec_app_earliest_cr_line'] = history['sec_app_earliest_cr_line'].dt.days
current['sec_app_earliest_cr_line'] = pd.to_datetime(current['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
current['sec_app_earliest_cr_line'] = pd.Timestamp.today() - current['sec_app_earliest_cr_line']
current['sec_app_earliest_cr_line'] = current['sec_app_earliest_cr_line'].dt.days
# Engineer features for issue date year & month
history['issue_d_year'] = history['issue_d'].dt.year
history['issue_d_month'] = history['issue_d'].dt.month
current['issue_d_year'] = pd.Timestamp.today().year
current['issue_d_month'] = pd.Timestamp.today().month
# Use Python sets to compare the historical columns & current columns
common_columns = set(history.columns) & set(current.columns)
just_history = set(history.columns) - set(current.columns)
just_current = set(current.columns) - set(history.columns)
# Train on the historical data.
# For features, use only the common columns shared by the historical & current data.
# For the target, use `loan_status` ('Fully Paid' or 'Charged Off')
features = list(common_columns)
target = 'loan_status'
X = history[features]
y = history[target]
y.sample(5)
# Do train/validate/test 3-way split
from sklearn.model_selection import train_test_split
X_trainval, X_test, y_trainval, y_test = train_test_split(
X,
y,
test_size=20000,
stratify=y,
random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval,
y_trainval,
test_size=20000,
stratify=y_trainval,
random_state=42)
print('X_train shape', X_train.shape)
print('y_train shape', y_train.shape)
print('X_val shape', X_val.shape)
print('y_val shape', y_val.shape)
print('X_test shape', X_test.shape)
print('y_test shape', y_test.shape)
###Output
X_train shape (88334, 108)
y_train shape (88334,)
X_val shape (20000, 108)
y_val shape (20000,)
X_test shape (20000, 108)
y_test shape (20000,)
###Markdown
Understand why accuracy is a misleading metric when classes are imbalanced Get accuracy score for majority class baseline
###Code
y_train.value_counts(normalize=True)
import numpy as np
from sklearn.metrics import accuracy_score
majority_class = y_train.mode()[0]
y_pred = np.full_like(y_val, fill_value=majority_class)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for majority class baseline
###Code
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def plot_confusion_matrix(y_true, y_pred, normalize=True):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
cm = confusion_matrix(y_true, y_pred)
if normalize:
cm = cm/cm.sum(axis=1).reshape(len(labels), 1)
table = pd.DataFrame(cm,
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='.2f', cmap='viridis')
plot_confusion_matrix(y_val, y_pred);
plot_confusion_matrix(y_val, y_pred, normalize=False);
###Output
_____no_output_____
###Markdown
Get precision & recall for majority class baseline
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
/home/akim/anaconda/lib/python3.7/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
/home/akim/anaconda/lib/python3.7/site-packages/sklearn/metrics/classification.py:1143: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
###Markdown
Get ROC AUC score for majority class baseline[sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
###Code
from sklearn.metrics import roc_auc_score, roc_curve
# What if we predicted 100% probability of the positive class for every prediction?
# This is like the majority class baseline, but with predicted probabilities,
# instead of just discrete classes.
# VERY IMPORTANT — Use predicted probabilities with ROC AUC score!
# Because, it's a metric of how well you rank/sort predicted probabilities.
y_pred_proba = np.full_like(y_val, fill_value=1.00)
roc_auc_score(y_val, y_pred_proba)
# ROC AUC is 0.50 by definition when predicting any constant probability value
y_pred_proba = np.full_like(y_val, fill_value=0)
roc_auc_score(y_val, y_pred_proba)
y_pred_proba = np.full_like(y_val, fill_value=0.50)
roc_auc_score(y_val, y_pred_proba)
y_val.value_counts()
y_pred_proba = np.full_like(y_val, fill_value=0.70)
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_val=='Charged Off', y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
###Output
_____no_output_____
###Markdown
Fit a model Count missing values
###Code
null_counts = X_train.isnull().sum().sort_values(ascending=False)
null_counts.reset_index()
many_nulls = null_counts[:73].index
print(list(many_nulls))
###Output
['member_id', 'sec_app_mths_since_last_major_derog', 'sec_app_revol_util', 'sec_app_earliest_cr_line', 'sec_app_fico_range_low', 'sec_app_open_act_il', 'revol_bal_joint', 'sec_app_collections_12_mths_ex_med', 'sec_app_open_acc', 'sec_app_fico_range_high', 'sec_app_inq_last_6mths', 'sec_app_chargeoff_within_12_mths', 'sec_app_num_rev_accts', 'sec_app_mort_acc', 'dti_joint', 'annual_inc_joint', 'desc', 'mths_since_last_record', 'mths_since_recent_bc_dlq', 'mths_since_last_major_derog', 'mths_since_recent_revol_delinq', 'il_util', 'mths_since_rcnt_il', 'inq_last_12m', 'open_act_il', 'open_acc_6m', 'inq_fi', 'max_bal_bc', 'total_bal_il', 'all_util', 'open_rv_24m', 'open_il_12m', 'open_rv_12m', 'total_cu_tl', 'open_il_24m', 'mths_since_last_delinq', 'mths_since_recent_inq', 'num_tl_120dpd_2m', 'mo_sin_old_il_acct', 'emp_title', 'emp_length', 'pct_tl_nvr_dlq', 'avg_cur_bal', 'tot_hi_cred_lim', 'num_tl_30dpd', 'num_accts_ever_120_pd', 'total_il_high_credit_limit', 'num_rev_tl_bal_gt_0', 'num_il_tl', 'tot_cur_bal', 'mo_sin_rcnt_tl', 'mo_sin_old_rev_tl_op', 'num_bc_tl', 'num_tl_90g_dpd_24m', 'num_actv_rev_tl', 'tot_coll_amt', 'num_rev_accts', 'mo_sin_rcnt_rev_tl_op', 'total_rev_hi_lim', 'num_tl_op_past_12m', 'num_actv_bc_tl', 'num_op_rev_tl', 'bc_util', 'percent_bc_gt_75', 'bc_open_to_buy', 'mths_since_recent_bc', 'num_bc_sats', 'num_sats', 'acc_open_past_24mths', 'total_bal_ex_mort', 'mort_acc', 'total_bc_limit', 'title']
###Markdown
Wrangle data
###Code
def wrangle(X):
X = X.copy()
# Engineer new feature for every feature: is the feature null?
for col in X:
X[col+'_NULL'] = X[col].isnull()
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Convert employment length from string to float
X['emp_length'] = X['emp_length'].str.replace(r'\D','').astype(float)
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Get length of free text fields
X['title'] = X['title'].str.len()
X['desc'] = X['desc'].str.len()
X['emp_title'] = X['emp_title'].str.len()
# Convert sub_grade from string "A1"-"D5" to integer 1-20
sub_grade_ranks = {'A1': 1, 'A2': 2, 'A3': 3, 'A4': 4, 'A5': 5, 'B1': 6, 'B2': 7,
'B3': 8, 'B4': 9, 'B5': 10, 'C1': 11, 'C2': 12, 'C3': 13, 'C4': 14,
'C5': 15, 'D1': 16, 'D2': 17, 'D3': 18, 'D4': 19, 'D5': 20}
X['sub_grade'] = X['sub_grade'].map(sub_grade_ranks)
# Drop some columns
X = X.drop(columns='id') # Always unique
X = X.drop(columns='url') # Always unique
X = X.drop(columns='member_id') # Always null
X = X.drop(columns='grade') # Duplicative of sub_grade
X = X.drop(columns='zip_code') # High cardinality
# Only use these features which had nonzero permutation importances in earlier models
features = ['acc_open_past_24mths', 'addr_state', 'all_util', 'annual_inc',
'annual_inc_joint', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'collections_12_mths_ex_med', 'delinq_amnt', 'desc_NULL', 'dti',
'dti_joint', 'earliest_cr_line', 'emp_length', 'emp_length_NULL',
'emp_title', 'emp_title_NULL', 'emp_title_owner', 'fico_range_high',
'funded_amnt', 'home_ownership', 'inq_last_12m', 'inq_last_6mths',
'installment', 'int_rate', 'issue_d_month', 'issue_d_year', 'loan_amnt',
'max_bal_bc', 'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op',
'mo_sin_rcnt_rev_tl_op', 'mort_acc', 'mths_since_last_major_derog_NULL',
'mths_since_last_record', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_actv_bc_tl', 'num_actv_rev_tl', 'num_op_rev_tl', 'num_rev_tl_bal_gt_0',
'num_tl_120dpd_2m_NULL', 'open_rv_12m_NULL', 'open_rv_24m',
'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'pub_rec_bankruptcies', 'purpose',
'revol_bal', 'revol_bal_joint', 'sec_app_earliest_cr_line',
'sec_app_fico_range_high', 'sec_app_open_acc', 'sec_app_open_act_il',
'sub_grade', 'term', 'title', 'title_NULL', 'tot_coll_amt',
'tot_hi_cred_lim', 'total_acc', 'total_bal_il', 'total_bc_limit',
'total_cu_tl', 'total_rev_hi_lim']
X = X[features]
# Return the wrangled dataframe
return X
X_train = wrangle(X_train)
X_val = wrangle(X_val)
X_test = wrangle(X_test)
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
)
pipeline.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
Get accuracy score for model
###Code
y_pred = pipeline.predict(X_val)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for model
###Code
plot_confusion_matrix(y_val, y_pred);
plot_confusion_matrix(y_val, y_pred, normalize=False);
###Output
_____no_output_____
###Markdown
Get precision & recall for model
###Code
# Check precision & recall by hand from the confusion matrix above
91 + 77            # predicted 'Charged Off' = TP + FP
91 / (91 + 77)     # precision for 'Charged Off' = TP / (TP + FP) ≈ 0.54
91 / (91 + 3412)   # recall for 'Charged Off' = TP / (TP + FN) ≈ 0.03
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
Charged Off 0.54 0.03 0.05 3503
Fully Paid 0.83 1.00 0.90 16497
micro avg 0.83 0.83 0.83 20000
macro avg 0.68 0.51 0.48 20000
weighted avg 0.78 0.83 0.75 20000
###Markdown
Get ROC AUC score for model
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Understand ROC AUC (Receiver Operating Characteristic, Area Under the Curve) Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings."ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures how well a classifier ranks predicted probabilities. It ranges from 0 to 1. A naive majority class baseline will have an ROC AUC score of 0.5. Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds
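A tiny, hand-checkable example of that ranking interpretation (toy numbers, unrelated to the Lending Club data): with two positives and two negatives there are four positive/negative pairs, and ROC AUC is the fraction of pairs where the positive gets the higher predicted probability.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true_toy = np.array([1, 1, 0, 0])
y_proba_toy = np.array([0.9, 0.4, 0.6, 0.2])
# Pairs: (0.9 > 0.6) yes, (0.9 > 0.2) yes, (0.4 > 0.6) no, (0.4 > 0.2) yes -> 3/4
print(roc_auc_score(y_true_toy, y_proba_toy))  # 0.75
```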
###Code
from ipywidgets import interact, fixed
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.utils.multiclass import unique_labels
def set_threshold(y_true, y_pred_proba, threshold=0.5):
"""
For binary classification problems.
y_pred_proba : predicted probability of class 1
"""
# Apply threshold to predicted probabilities
# to get discrete predictions
class_0, class_1 = unique_labels(y_true)
y_pred = np.full_like(y_true, fill_value=class_0)
y_pred[y_pred_proba > threshold] = class_1
# Plot distribution of predicted probabilities
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.title('Distribution of predicted probabilities')
plt.show()
# Calculate true positive rate and false positive rate
true_positives = (y_pred==y_true) & (y_pred==class_1)
false_positives = (y_pred!=y_true) & (y_pred==class_1)
actual_positives = (y_true==class_1)
actual_negatives = (y_true==class_0)
true_positive_rate = true_positives.sum() / actual_positives.sum()
false_positive_rate = false_positives.sum() / actual_negatives.sum()
print('False Positive Rate', false_positive_rate)
print('True Positive Rate', true_positive_rate)
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_true==class_1, y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# Plot point on ROC curve for the current threshold
plt.scatter(false_positive_rate, true_positive_rate)
plt.show()
# Show ROC AUC score
print('Area under the Receiver Operating Characteristic curve:',
roc_auc_score(y_true, y_pred_proba))
# Show confusion matrix & classification report
plot_confusion_matrix(y_true, y_pred)
print(classification_report(y_true, y_pred))
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0,1,0.05));
###Output
_____no_output_____
###Markdown
Use the class_weight parameter in scikit-learn Here's a fun demo you can explore! The next code cells do five things: 1. Generate dataWe use scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html) function to generate fake data for a binary classification problem, based on several parameters, including:- Number of samples- Weights, meaning "the proportions of samples assigned to each class."- Class separation: "Larger values spread out the clusters/classes and make the classification task easier."(We are generating fake data so it is easy to visualize.) 2. Split dataWe split the data three ways, into train, validation, and test sets. (For this toy example, it's not really necessary to do a three-way split. A two-way split, or even no split, would be ok. But I'm trying to demonstrate good habits, even in toy examples, to avoid confusion.) 3. Fit modelWe use scikit-learn to fit a [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) on the training data.We use this model parameter:> **class_weight : _dict or ‘balanced’, default: None_**> Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.> The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`. 4. Evaluate modelWe use our Logistic Regression model, which was fit on the training data, to generate predictions for the validation data.Then we print [scikit-learn's Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-report), with many metrics, and also the accuracy score. We are comparing the correct labels to the Logistic Regression's predicted labels, for the validation set. 5. Visualize decision functionBased on these examples- https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/combine/plot_comparison_combine.html- http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/example-1-decision-regions-in-2d
###Code
from sklearn.model_selection import train_test_split
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
# "Balanced" class_weight formula from the scikit-learn docs: n_samples / (n_classes * np.bincount(y))
# np.bincount needs integer labels, so encode the string target first
y_int = (y == 'Charged Off').astype(int)
n_classes = 2
n_samples = len(y_int)
weights = n_samples / (n_classes * np.bincount(y_int))
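# The same "balanced" weights can be obtained from scikit-learn's helper; a small
# toy label array is used here so the check runs on its own.
from sklearn.utils.class_weight import compute_class_weight
y_toy = np.array([0] * 95 + [1] * 5)
print(compute_class_weight('balanced', classes=np.array([0, 1]), y=y_toy))  # roughly [0.53, 10.0]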
%matplotlib inline
from IPython.display import display
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
#1. Generate data
# Try re-running the cell with different values for these parameters
n_samples = 1000
weights = (0.95, 0.05)
class_sep = 0.8
X, y = make_classification(n_samples=n_samples, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2,
n_clusters_per_class=1, weights=weights,
class_sep=class_sep, random_state=0)
# 2. Split data
# Uses our custom train_validation_test_split function
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1, random_state=1)
# 3. Fit model
# Try re-running the cell with different values for this parameter
class_weight = {0: 1, 1: 5}
model = LogisticRegression(solver='lbfgs', class_weight=class_weight)
model.fit(X_train, y_train)
# 4. Evaluate model
y_pred = model.predict(X_val)
print(classification_report(y_val, y_pred))
plot_confusion_matrix(y_val, y_pred)
# 5. Visualize decision regions
plt.figure(figsize=(10, 6))
plot_decision_regions(X_val, y_val, model, legend=0);
###Output
precision recall f1-score support
0 1.00 0.98 0.99 96
1 0.67 1.00 0.80 4
micro avg 0.98 0.98 0.98 100
macro avg 0.83 0.99 0.89 100
weighted avg 0.99 0.98 0.98 100
###Markdown
_Lambda School Data Science, Unit 2 — Predictive Modeling_ ROC AUC Objectives - Understand why accuracy is a misleading metric when classes are imbalanced- Use classification metric: ROC AUC- Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds- Use the class_weight parameter in scikit-learn Libraries category_encoders- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` [mlxtend](http://rasbt.github.io/mlxtend/) (to plot decision regions)- Local Anaconda: `conda install -c conda-forge mlxtend`- Google Colab: Already installed [tqdm](https://tqdm.github.io/) (for progress bars)- Local Anaconda: `conda install -c conda-forge tqdm`- Google Colab: Already installed Downgrade Matplotlib? Need version != 3.1.1Because of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.This _isn't_ required for your homework, but is required to run this notebook. `pip install matplotlib==3.1.0`
###Code
! pip install matplotlib==3.1.0
###Output
Requirement already satisfied: matplotlib==3.1.0 in c:\programdata\anaconda3\lib\site-packages (3.1.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\lib\site-packages (from matplotlib==3.1.0) (1.0.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in c:\programdata\anaconda3\lib\site-packages (from matplotlib==3.1.0) (2.3.0)
Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\lib\site-packages (from matplotlib==3.1.0) (0.10.0)
Requirement already satisfied: numpy>=1.11 in c:\programdata\anaconda3\lib\site-packages (from matplotlib==3.1.0) (1.15.4)
Requirement already satisfied: python-dateutil>=2.1 in c:\programdata\anaconda3\lib\site-packages (from matplotlib==3.1.0) (2.7.5)
Requirement already satisfied: setuptools in c:\programdata\anaconda3\lib\site-packages (from kiwisolver>=1.0.1->matplotlib==3.1.0) (40.6.3)
Requirement already satisfied: six in c:\programdata\anaconda3\lib\site-packages (from cycler>=0.10->matplotlib==3.1.0) (1.12.0)
###Markdown
Lending Club 🏦This lecture uses Lending Club data, historical and current. Predict if peer-to-peer loans are charged off or fully paid. Decide which loans to invest in. Background[According to Wikipedia,](https://en.wikipedia.org/wiki/Lending_Club)> Lending Club is the world's largest peer-to-peer lending platform. Lending Club enables borrowers to create unsecured personal loans between \\$1,000 and \\$40,000. The standard loan period is three years. **Investors can search and browse the loan listings on Lending Club website and select loans that they want to invest in based on the information supplied about the borrower, amount of loan, loan grade, and loan purpose.** Investors make money from interest. Lending Club makes money by charging borrowers an origination fee and investors a service fee.Lending Club's article about [Benefits of diversification](https://www.lendingclub.com/investing/investor-education/benefits-of-diversification) explains,> **With the investment minimum of \\$1,000, you can get up to 40 Notes at \$25 each.**You can read more good context in [Data-Driven Investment Strategies for Peer-to-Peer Lending: A Case Study for Teaching Data Science](https://www.liebertpub.com/doi/full/10.1089/big.2018.0092):> Current refers to a loan that is still being reimbursed in a timely manner. Late corresponds to a loan on which a payment is between 16 and 120 days overdue. If the payment is delayed by more than 121 days, the loan is considered to be in Default. If LendingClub has decided that the loan will not be paid off, then it is given the status of Charged-Off.> These dynamics imply that 5 months after the term of each loan has ended, every loan ends in one of two LendingClub states—fully paid or charged-off. We call these two statuses fully paid and defaulted, respectively, and we refer to a loan that has reached one of these statuses as expired. **One way to simplify the problem is to consider only loans that have expired at the time of analysis.**> A significant portion (13.5%) of loans ended in Default status; depending on how much of the loan was paid back, these loansmight have resulted in a significant loss to investors who had invested in them. The remainder was Fully Paid—the borrower fully reimbursed the loan’s outstanding balance with interest, and the investor earned a positive return on his or her investment. Therefore, to avoid unsuccessful investments, **our goal is to estimate which loans are more likely to default and which will yield low returns.**
###Code
import pandas as pd
pd.options.display.max_columns = 200
pd.options.display.max_rows = 200
LOCAL = '../data/lendingclub/'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Tree-Ensembles/master/data/lendingclub/'
source = WEB
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history = pd.read_csv(source + 'lending-club-subset.csv')
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Current loans available for manual investing, June 17, 2019
# Source: https://www.lendingclub.com/browse/browse.action
current = pd.read_csv(source + 'primaryMarketNotes_browseNotes_1-RETAIL.csv')
# Print the number of observations
print(f'{len(history)} historical loans')
print(f'{len(current)} current loans')
# Calculate percent of each loan repaid
history['percent_paid'] = history['total_pymnt'] / history['funded_amnt']
# See percent paid for charged off vs fully paid loans
history.groupby('loan_status')['percent_paid'].describe()
###Output
_____no_output_____
###Markdown
Begin with baselines: expected value of random decisions
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import percentileofscore
import seaborn as sns
from tqdm import tnrange
def simulate(n=10000, grades=['A','B','C','D'],
start_date='2007-07-01',
end_date='2019-03-01'):
"""
What if you picked 40 random loans for $25 investments?
How much would you have been paid back?
Repeat the simulation many times, and plot the distribution
of probable outcomes.
This doesn't consider fees or "time value of money."
"""
condition = ((history['grade'].isin(grades)) &
(history['issue_d'] >= start_date) &
(history['issue_d'] <= end_date))
possible = history[condition]
simulations = []
for _ in tnrange(n):
picks = possible.sample(40).copy()
picks['paid'] = 25 * picks['percent_paid']
paid = picks['paid'].sum()
simulations.append(paid)
simulations = pd.Series(simulations)
sns.distplot(simulations)
plt.axvline(x=1000)
percent = percentileofscore(simulations, 1000)
plt.title(f'{percent}% of simulations did not profit. {start_date}-{end_date}, {grades}')
simulate()
simulate(grades=['A'])
simulate(grades=['D'])
###Output
_____no_output_____
###Markdown
Wrangle data- Engineer date-based features- Remove features to avoid leakage- Do 3-way split, train/validate/test
###Code
# Engineer date-based features
# Transform earliest_cr_line to an integer:
# How many days the earliest credit line was open, before the loan was issued.
# For current loans available for manual investing, assume the loan will be issued today.
history['earliest_cr_line'] = pd.to_datetime(history['earliest_cr_line'], infer_datetime_format=True)
history['earliest_cr_line'] = history['issue_d'] - history['earliest_cr_line']
history['earliest_cr_line'] = history['earliest_cr_line'].dt.days
current['earliest_cr_line'] = pd.to_datetime(current['earliest_cr_line'], infer_datetime_format=True)
current['earliest_cr_line'] = pd.Timestamp.today() - current['earliest_cr_line']
current['earliest_cr_line'] = current['earliest_cr_line'].dt.days
# Transform earliest_cr_line for the secondary applicant
history['sec_app_earliest_cr_line'] = pd.to_datetime(history['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
history['sec_app_earliest_cr_line'] = history['issue_d'] - history['sec_app_earliest_cr_line']
history['sec_app_earliest_cr_line'] = history['sec_app_earliest_cr_line'].dt.days
current['sec_app_earliest_cr_line'] = pd.to_datetime(current['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
current['sec_app_earliest_cr_line'] = pd.Timestamp.today() - current['sec_app_earliest_cr_line']
current['sec_app_earliest_cr_line'] = current['sec_app_earliest_cr_line'].dt.days
# Engineer features for issue date year & month
history['issue_d_year'] = history['issue_d'].dt.year
history['issue_d_month'] = history['issue_d'].dt.month
current['issue_d_year'] = pd.Timestamp.today().year
current['issue_d_month'] = pd.Timestamp.today().month
# Use Python sets to compare the historical columns & current columns
common_columns = set(history.columns) & set(current.columns)
just_history = set(history.columns) - set(current.columns)
just_current = set(current.columns) - set(history.columns)
# Train on the historical data.
# For features, use only the common columns shared by the historical & current data.
# For the target, use `loan_status` ('Fully Paid' or 'Charged Off')
features = list(common_columns)
target = 'loan_status'
X = history[features]
y = history[target]
# Do train/validate/test 3-way split
from sklearn.model_selection import train_test_split
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=20000, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=20000,
stratify=y_trainval, random_state=42)
print('X_train shape', X_train.shape)
print('y_train shape', y_train.shape)
print('X_val shape', X_val.shape)
print('y_val shape', y_val.shape)
print('X_test shape', X_test.shape)
print('y_test shape', y_test.shape)
###Output
_____no_output_____
###Markdown
Understand why accuracy is a misleading metric when classes are imbalanced Get accuracy score for majority class baseline
###Code
y_train.value_counts(normalize=True)
import numpy as np
from sklearn.metrics import accuracy_score
majority_class = y_train.mode()[0]
y_pred = np.full_like(y_val, fill_value=majority_class)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for majority class baseline
###Code
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____
###Markdown
Get precision & recall for majority class baseline
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Get ROC AUC score for majority class baseline[sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
###Code
from sklearn.metrics import roc_auc_score, roc_curve
# What if we predicted 100% probability of the positive class for every prediction?
# This is like the majority class baseline, but with predicted probabilities,
# instead of just discrete classes.
# VERY IMPORTANT — Use predicted probabilities with ROC AUC score!
# Because, it's a metric of how well you rank/sort predicted probabilities.
y_pred_proba = np.full_like(y_val, fill_value=1.00)
roc_auc_score(y_val, y_pred_proba)
# ROC AUC is 0.50 by definition when predicting any constant probability value
y_pred_proba = np.full_like(y_val, fill_value=0)
roc_auc_score(y_val, y_pred_proba)
y_pred_proba = np.full_like(y_val, fill_value=0.50)
roc_auc_score(y_val, y_pred_proba)
y_val.value_counts()
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_val=='Charged Off', y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
###Output
_____no_output_____
###Markdown
Fit a model Count missing values
###Code
null_counts = X_train.isnull().sum().sort_values(ascending=False)
null_counts.reset_index()
many_nulls = null_counts[:73].index
print(list(many_nulls))
###Output
_____no_output_____
###Markdown
Wrangle data
###Code
def wrangle(X):
X = X.copy()
# Engineer new feature for every feature: is the feature null?
for col in X:
X[col+'_NULL'] = X[col].isnull()
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Convert employment length from string to float
X['emp_length'] = X['emp_length'].str.replace(r'\D','').astype(float)
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Get length of free text fields
X['title'] = X['title'].str.len()
X['desc'] = X['desc'].str.len()
X['emp_title'] = X['emp_title'].str.len()
# Convert sub_grade from string "A1"-"D5" to integer 1-20
sub_grade_ranks = {'A1': 1, 'A2': 2, 'A3': 3, 'A4': 4, 'A5': 5, 'B1': 6, 'B2': 7,
'B3': 8, 'B4': 9, 'B5': 10, 'C1': 11, 'C2': 12, 'C3': 13, 'C4': 14,
'C5': 15, 'D1': 16, 'D2': 17, 'D3': 18, 'D4': 19, 'D5': 20}
X['sub_grade'] = X['sub_grade'].map(sub_grade_ranks)
# Drop some columns
X = X.drop(columns='id') # Always unique
X = X.drop(columns='url') # Always unique
X = X.drop(columns='member_id') # Always null
X = X.drop(columns='grade') # Duplicative of sub_grade
X = X.drop(columns='zip_code') # High cardinality
# Only use these features which had nonzero permutation importances in earlier models
features = ['acc_open_past_24mths', 'addr_state', 'all_util', 'annual_inc',
'annual_inc_joint', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'collections_12_mths_ex_med', 'delinq_amnt', 'desc_NULL', 'dti',
'dti_joint', 'earliest_cr_line', 'emp_length', 'emp_length_NULL',
'emp_title', 'emp_title_NULL', 'emp_title_owner', 'fico_range_high',
'funded_amnt', 'home_ownership', 'inq_last_12m', 'inq_last_6mths',
'installment', 'int_rate', 'issue_d_month', 'issue_d_year', 'loan_amnt',
'max_bal_bc', 'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op',
'mo_sin_rcnt_rev_tl_op', 'mort_acc', 'mths_since_last_major_derog_NULL',
'mths_since_last_record', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_actv_bc_tl', 'num_actv_rev_tl', 'num_op_rev_tl', 'num_rev_tl_bal_gt_0',
'num_tl_120dpd_2m_NULL', 'open_rv_12m_NULL', 'open_rv_24m',
'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'pub_rec_bankruptcies', 'purpose',
'revol_bal', 'revol_bal_joint', 'sec_app_earliest_cr_line',
'sec_app_fico_range_high', 'sec_app_open_acc', 'sec_app_open_act_il',
'sub_grade', 'term', 'title', 'title_NULL', 'tot_coll_amt',
'tot_hi_cred_lim', 'total_acc', 'total_bal_il', 'total_bc_limit',
'total_cu_tl', 'total_rev_hi_lim']
X = X[features]
# Return the wrangled dataframe
return X
X_train = wrangle(X_train)
X_val = wrangle(X_val)
X_test = wrangle(X_test)
%%time
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
)
pipeline.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
Get accuracy score for model
###Code
y_pred = pipeline.predict(X_val)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for model
###Code
plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____
###Markdown
Get precision & recall for model
###Code
print(classification_report(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Get ROC AUC score for model
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Understand ROC AUC (Receiver Operating Characteristic, Area Under the Curve) Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.htmlreceiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings."ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures how well a classifier ranks predicted probabilities. It ranges from 0 to 1. A naive majority class baseline will have an ROC AUC score of 0.5. Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds
###Code
from ipywidgets import interact, fixed
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.utils.multiclass import unique_labels
def set_threshold(y_true, y_pred_proba, threshold=0.5):
"""
For binary classification problems.
y_pred_proba : predicted probability of class 1
"""
# Apply threshold to predicted probabilities
# to get discrete predictions
class_0, class_1 = unique_labels(y_true)
y_pred = np.full_like(y_true, fill_value=class_0)
y_pred[y_pred_proba > threshold] = class_1
# Plot distribution of predicted probabilities
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.title('Distribution of predicted probabilities')
plt.show()
# Calculate true positive rate and false positive rate
true_positives = (y_pred==y_true) & (y_pred==class_1)
false_positives = (y_pred!=y_true) & (y_pred==class_1)
actual_positives = (y_true==class_1)
actual_negatives = (y_true==class_0)
true_positive_rate = true_positives.sum() / actual_positives.sum()
false_positive_rate = false_positives.sum() / actual_negatives.sum()
print('False Positive Rate', false_positive_rate)
print('True Positive Rate', true_positive_rate)
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_true==class_1, y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# Plot point on ROC curve for the current threshold
plt.scatter(false_positive_rate, true_positive_rate)
plt.show()
# Show ROC AUC score
print('Area under the Receiver Operating Characteristic curve:',
roc_auc_score(y_true, y_pred_proba))
# Show confusion matrix & classification report
plot_confusion_matrix(y_true, y_pred)
print(classification_report(y_true, y_pred))
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0,1,0.05));
###Output
_____no_output_____
###Markdown
Use the class_weight parameter in scikit-learn Here's a fun demo you can explore! The next code cells do five things: 1. Generate dataWe use scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html) function to generate fake data for a binary classification problem, based on several parameters, including:- Number of samples- Weights, meaning "the proportions of samples assigned to each class."- Class separation: "Larger values spread out the clusters/classes and make the classification task easier."(We are generating fake data so it is easy to visualize.) 2. Split dataWe split the data three ways, into train, validation, and test sets. (For this toy example, it's not really necessary to do a three-way split. A two-way split, or even no split, would be ok. But I'm trying to demonstrate good habits, even in toy examples, to avoid confusion.) 3. Fit modelWe use scikit-learn to fit a [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) on the training data.We use this model parameter:> **class_weight : _dict or ‘balanced’, default: None_**> Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.> The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`. 4. Evaluate modelWe use our Logistic Regression model, which was fit on the training data, to generate predictions for the validation data.Then we print [scikit-learn's Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report), with many metrics, and also the accuracy score. We are comparing the correct labels to the Logistic Regression's predicted labels, for the validation set. 5. Visualize decision functionBased on these examples- https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/combine/plot_comparison_combine.html- http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/example-1-decision-regions-in-2d
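As a minimal sketch of what the quoted formula does, the snippet below applies `n_samples / (n_classes * np.bincount(y))` to toy labels with the same 95/5 imbalance used in the demo, and checks it against scikit-learn's `compute_class_weight` helper.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0] * 95 + [1] * 5)   # toy labels with a 95/5 class imbalance

# The formula quoted above: n_samples / (n_classes * np.bincount(y))
manual = len(y) / (2 * np.bincount(y))
print(manual)   # the rare class gets a much larger weight (10.0 vs ~0.53)

# scikit-learn computes the same weights when class_weight='balanced'
print(compute_class_weight(class_weight='balanced', classes=np.array([0, 1]), y=y))
```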
###Code
from sklearn.model_selection import train_test_split
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
%matplotlib inline
from IPython.display import display
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
#1. Generate data
# Try re-running the cell with different values for these parameters
n_samples = 1000
weights = (0.95, 0.05)
class_sep = 0.8
X, y = make_classification(n_samples=n_samples, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2,
n_clusters_per_class=1, weights=weights,
class_sep=class_sep, random_state=0)
# 2. Split data
# Uses our custom train_validation_test_split function
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1, random_state=1)
# 3. Fit model
# Try re-running the cell with different values for this parameter
class_weight = None
model = LogisticRegression(solver='lbfgs', class_weight=class_weight)
model.fit(X_train, y_train)
# 4. Evaluate model
y_pred = model.predict(X_val)
print(classification_report(y_val, y_pred))
plot_confusion_matrix(y_val, y_pred)
# 5. Visualize decision regions
plt.figure(figsize=(10, 6))
plot_decision_regions(X_val, y_val, model, legend=0);
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2 — Predictive Modeling_ ROC AUC Objectives - Understand why accuracy is a misleading metric when classes are imbalanced- Use classification metric: ROC AUC- Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds- Use the class_weight parameter in scikit-learn Libraries category_encoders- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` [mlxtend](http://rasbt.github.io/mlxtend/) (to plot decision regions)- Local Anaconda: `conda install -c conda-forge mlxtend`- Google Colab: Already installed [tqdm](https://tqdm.github.io/) (for progress bars)- Local Anaconda: `conda install -c conda-forge tqdm`- Google Colab: Already installed Downgrade Matplotlib? Need version != 3.1.1Because of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.This _isn't_ required for your homework, but is required to run this notebook. `pip install matplotlib==3.1.0`
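If you are not sure whether this applies to your environment, a quick version check before plotting is enough; only 3.1.1 needs the downgrade described above.

```python
import matplotlib
print(matplotlib.__version__)   # downgrade only if this prints 3.1.1
```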
###Code
# !pip install category_encoders matplotlib==3.1.0
###Output
_____no_output_____
###Markdown
Lending Club 🏦This lecture uses Lending Club data, historical and current. Predict if peer-to-peer loans are charged off or fully paid. Decide which loans to invest in. Background[According to Wikipedia,](https://en.wikipedia.org/wiki/Lending_Club)> Lending Club is the world's largest peer-to-peer lending platform. Lending Club enables borrowers to create unsecured personal loans between \\$1,000 and \\$40,000. The standard loan period is three years. **Investors can search and browse the loan listings on Lending Club website and select loans that they want to invest in based on the information supplied about the borrower, amount of loan, loan grade, and loan purpose.** Investors make money from interest. Lending Club makes money by charging borrowers an origination fee and investors a service fee.Lending Club's article about [Benefits of diversification](https://www.lendingclub.com/investing/investor-education/benefits-of-diversification) explains,> **With the investment minimum of \\$1,000, you can get up to 40 Notes at \$25 each.**You can read more good context in [Data-Driven Investment Strategies for Peer-to-Peer Lending: A Case Study for Teaching Data Science](https://www.liebertpub.com/doi/full/10.1089/big.2018.0092):> Current refers to a loan that is still being reimbursed in a timely manner. Late corresponds to a loan on which a payment is between 16 and 120 days overdue. If the payment is delayed by more than 121 days, the loan is considered to be in Default. If LendingClub has decided that the loan will not be paid off, then it is given the status of Charged-Off.> These dynamics imply that 5 months after the term of each loan has ended, every loan ends in one of two LendingClub states—fully paid or charged-off. We call these two statuses fully paid and defaulted, respectively, and we refer to a loan that has reached one of these statuses as expired. **One way to simplify the problem is to consider only loans that have expired at the time of analysis.**> A significant portion (13.5%) of loans ended in Default status; depending on how much of the loan was paid back, these loans might have resulted in a significant loss to investors who had invested in them. The remainder was Fully Paid—the borrower fully reimbursed the loan’s outstanding balance with interest, and the investor earned a positive return on his or her investment. Therefore, to avoid unsuccessful investments, **our goal is to estimate which loans are more likely to default and which will yield low returns.**
###Code
import pandas as pd
pd.options.display.max_columns = 200
pd.options.display.max_rows = 200
LOCAL = '../data/lendingclub/'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Tree-Ensembles/master/data/lendingclub/'
source = WEB
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history = pd.read_csv(source + 'lending-club-subset.csv')
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Current loans available for manual investing, June 17, 2019
# Source: https://www.lendingclub.com/browse/browse.action
current = pd.read_csv(source + 'primaryMarketNotes_browseNotes_1-RETAIL.csv')
# Print the number of observations
print(f'{len(history)} historical loans')
print(f'{len(current)} current loans')
# Calculate percent of each loan repaid
history['percent_paid'] = history['total_pymnt'] / history['funded_amnt']
# See percent paid for charged off vs fully paid loans
history.groupby('loan_status')['percent_paid'].describe()
###Output
_____no_output_____
###Markdown
Begin with baselines: expected value of random decisions
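Before running the simulation, a rough expected value can be read directly from `percent_paid` (this reuses the `history` dataframe from the cells above; the exact number depends on the data):

```python
# Expected payback of 40 random $25 notes, ignoring fees and the time value of money:
# 40 notes * $25 * average fraction of principal (plus interest) repaid per loan.
expected_payback = 40 * 25 * history['percent_paid'].mean()
print(f'Expected payback on a $1,000 random portfolio: ${expected_payback:.0f}')
```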
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import percentileofscore
import seaborn as sns
from tqdm import tnrange
def simulate(n=10000, grades=['A','B','C','D'],
start_date='2007-07-01',
end_date='2019-03-01'):
"""
What if you picked 40 random loans for $25 investments?
How much would you have been paid back?
Repeat the simulation many times, and plot the distribution
of probable outcomes.
This doesn't consider fees or "time value of money."
"""
condition = ((history['grade'].isin(grades)) &
(history['issue_d'] >= start_date) &
(history['issue_d'] <= end_date))
possible = history[condition]
simulations = []
for _ in tnrange(n):
picks = possible.sample(40).copy()
picks['paid'] = 25 * picks['percent_paid']
paid = picks['paid'].sum()
simulations.append(paid)
simulations = pd.Series(simulations)
sns.distplot(simulations)
plt.axvline(x=1000)
percent = percentileofscore(simulations, 1000)
plt.title(f'{percent}% of simulations did not profit. {start_date}-{end_date}, {grades}')
simulate()
simulate(grades=['A'])
simulate(grades=['D'])
###Output
_____no_output_____
###Markdown
Wrangle data- Engineer date-based features- Remove features to avoid leakage- Do 3-way split, train/validate/test
###Code
# Engineer date-based features
# Transform earliest_cr_line to an integer:
# How many days the earliest credit line was open, before the loan was issued.
# For current loans available for manual investing, assume the loan will be issued today.
history['earliest_cr_line'] = pd.to_datetime(history['earliest_cr_line'], infer_datetime_format=True)
history['earliest_cr_line'] = history['issue_d'] - history['earliest_cr_line']
history['earliest_cr_line'] = history['earliest_cr_line'].dt.days
current['earliest_cr_line'] = pd.to_datetime(current['earliest_cr_line'], infer_datetime_format=True)
current['earliest_cr_line'] = pd.Timestamp.today() - current['earliest_cr_line']
current['earliest_cr_line'] = current['earliest_cr_line'].dt.days
# Transform earliest_cr_line for the secondary applicant
history['sec_app_earliest_cr_line'] = pd.to_datetime(history['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
history['sec_app_earliest_cr_line'] = history['issue_d'] - history['sec_app_earliest_cr_line']
history['sec_app_earliest_cr_line'] = history['sec_app_earliest_cr_line'].dt.days
current['sec_app_earliest_cr_line'] = pd.to_datetime(current['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
current['sec_app_earliest_cr_line'] = pd.Timestamp.today() - current['sec_app_earliest_cr_line']
current['sec_app_earliest_cr_line'] = current['sec_app_earliest_cr_line'].dt.days
# Engineer features for issue date year & month
history['issue_d_year'] = history['issue_d'].dt.year
history['issue_d_month'] = history['issue_d'].dt.month
current['issue_d_year'] = pd.Timestamp.today().year
current['issue_d_month'] = pd.Timestamp.today().month
# Use Python sets to compare the historical columns & current columns
common_columns = set(history.columns) & set(current.columns)
just_history = set(history.columns) - set(current.columns)
just_current = set(current.columns) - set(history.columns)
# Train on the historical data.
# For features, use only the common columns shared by the historical & current data.
# For the target, use `loan_status` ('Fully Paid' or 'Charged Off')
features = list(common_columns)
target = 'loan_status'
X = history[features]
y = history[target]
# Do train/validate/test 3-way split
from sklearn.model_selection import train_test_split
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=20000, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=20000,
stratify=y_trainval, random_state=42)
print('X_train shape', X_train.shape)
print('y_train shape', y_train.shape)
print('X_val shape', X_val.shape)
print('y_val shape', y_val.shape)
print('X_test shape', X_test.shape)
print('y_test shape', y_test.shape)
###Output
_____no_output_____
###Markdown
Understand why accuracy is a misleading metric when classes are imbalanced Get accuracy score for majority class baseline
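A tiny self-contained illustration of the problem (toy labels, not the loan data): with a 95/5 class split, a model that always predicts the majority class gets 95% accuracy while never identifying a single positive.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

y_true = np.array([0] * 950 + [1] * 50)   # 95% negatives, 5% positives
y_pred = np.zeros_like(y_true)            # always predict the majority class

print(accuracy_score(y_true, y_pred))             # 0.95, looks impressive
print(recall_score(y_true, y_pred, pos_label=1))  # 0.0, no positives found
```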
###Code
y_train.value_counts(normalize=True)
import numpy as np
from sklearn.metrics import accuracy_score
majority_class = y_train.mode()[0]
y_pred = np.full_like(y_val, fill_value=majority_class)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for majority class baseline
###Code
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____
###Markdown
Get precision & recall for majority class baseline
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Get ROC AUC score for majority class baseline[sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
###Code
from sklearn.metrics import roc_auc_score, roc_curve  # roc_curve is used by the ROC plot below
# What if we predicted 100% probability of the positive class for every prediction?
# This is like the majority class baseline, but with predicted probabilities,
# instead of just discrete classes.
# VERY IMPORTANT — Use predicted probabilities with ROC AUC score!
# Because, it's a metric of how well you rank/sort predicted probabilities.
y_pred_proba = np.full_like(y_val, fill_value=1.00)
roc_auc_score(y_val, y_pred_proba)
# ROC AUC is 0.50 by definition when predicting any constant probability value
y_pred_proba = np.full_like(y_val, fill_value=0)
roc_auc_score(y_val, y_pred_proba)
y_pred_proba = np.full_like(y_val, fill_value=0.50)
roc_auc_score(y_val, y_pred_proba)
y_val.value_counts()
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_val=='Charged Off', y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
###Output
_____no_output_____
###Markdown
Fit a model Count missing values
###Code
null_counts = X_train.isnull().sum().sort_values(ascending=False)
null_counts.reset_index()
many_nulls = null_counts[:73].index
print(list(many_nulls))
###Output
_____no_output_____
###Markdown
Wrangle data
###Code
def wrangle(X):
X = X.copy()
# Engineer new feature for every feature: is the feature null?
for col in X:
X[col+'_NULL'] = X[col].isnull()
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Convert employment length from string to float
X['emp_length'] = X['emp_length'].str.replace(r'\D','').astype(float)
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Get length of free text fields
X['title'] = X['title'].str.len()
X['desc'] = X['desc'].str.len()
X['emp_title'] = X['emp_title'].str.len()
# Convert sub_grade from string "A1"-"D5" to integer 1-20
sub_grade_ranks = {'A1': 1, 'A2': 2, 'A3': 3, 'A4': 4, 'A5': 5, 'B1': 6, 'B2': 7,
'B3': 8, 'B4': 9, 'B5': 10, 'C1': 11, 'C2': 12, 'C3': 13, 'C4': 14,
'C5': 15, 'D1': 16, 'D2': 17, 'D3': 18, 'D4': 19, 'D5': 20}
X['sub_grade'] = X['sub_grade'].map(sub_grade_ranks)
# Drop some columns
X = X.drop(columns='id') # Always unique
X = X.drop(columns='url') # Always unique
X = X.drop(columns='member_id') # Always null
X = X.drop(columns='grade') # Duplicative of sub_grade
X = X.drop(columns='zip_code') # High cardinality
# Only use these features which had nonzero permutation importances in earlier models
features = ['acc_open_past_24mths', 'addr_state', 'all_util', 'annual_inc',
'annual_inc_joint', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'collections_12_mths_ex_med', 'delinq_amnt', 'desc_NULL', 'dti',
'dti_joint', 'earliest_cr_line', 'emp_length', 'emp_length_NULL',
'emp_title', 'emp_title_NULL', 'emp_title_owner', 'fico_range_high',
'funded_amnt', 'home_ownership', 'inq_last_12m', 'inq_last_6mths',
'installment', 'int_rate', 'issue_d_month', 'issue_d_year', 'loan_amnt',
'max_bal_bc', 'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op',
'mo_sin_rcnt_rev_tl_op', 'mort_acc', 'mths_since_last_major_derog_NULL',
'mths_since_last_record', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_actv_bc_tl', 'num_actv_rev_tl', 'num_op_rev_tl', 'num_rev_tl_bal_gt_0',
'num_tl_120dpd_2m_NULL', 'open_rv_12m_NULL', 'open_rv_24m',
'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'pub_rec_bankruptcies', 'purpose',
'revol_bal', 'revol_bal_joint', 'sec_app_earliest_cr_line',
'sec_app_fico_range_high', 'sec_app_open_acc', 'sec_app_open_act_il',
'sub_grade', 'term', 'title', 'title_NULL', 'tot_coll_amt',
'tot_hi_cred_lim', 'total_acc', 'total_bal_il', 'total_bc_limit',
'total_cu_tl', 'total_rev_hi_lim']
X = X[features]
# Return the wrangled dataframe
return X
X_train = wrangle(X_train)
X_val = wrangle(X_val)
X_test = wrangle(X_test)
%%time
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
)
pipeline.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
Get accuracy score for model
###Code
y_pred = pipeline.predict(X_val)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for model
###Code
plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____
###Markdown
Get precision & recall for model
###Code
print(classification_report(y_val, y_pred))
###Output
_____no_output_____
###Markdown
Get ROC AUC score for model
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Understand ROC AUC (Receiver Operating Characteristic, Area Under the Curve) Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings."ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures how well a classifier ranks predicted probabilities. It ranges from 0 to 1. A naive majority class baseline will have an ROC AUC score of 0.5. Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds
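Another self-contained way to see why this metric is convenient: accuracy changes as you move the decision threshold, while ROC AUC is computed from the predicted probabilities and does not depend on any threshold.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1])
proba  = np.array([0.2, 0.3, 0.4, 0.6, 0.7, 0.9])

for threshold in [0.3, 0.5, 0.8]:
    y_pred = (proba > threshold).astype(int)
    print(f'threshold={threshold}: accuracy={accuracy_score(y_true, y_pred):.2f}')

print('ROC AUC:', roc_auc_score(y_true, proba))   # one number, no threshold needed
```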
###Code
from ipywidgets import interact, fixed
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.utils.multiclass import unique_labels
def set_threshold(y_true, y_pred_proba, threshold=0.5):
"""
For binary classification problems.
y_pred_proba : predicted probability of class 1
"""
# Apply threshold to predicted probabilities
# to get discrete predictions
class_0, class_1 = unique_labels(y_true)
y_pred = np.full_like(y_true, fill_value=class_0)
y_pred[y_pred_proba > threshold] = class_1
# Plot distribution of predicted probabilities
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.title('Distribution of predicted probabilities')
plt.show()
# Calculate true positive rate and false positive rate
true_positives = (y_pred==y_true) & (y_pred==class_1)
false_positives = (y_pred!=y_true) & (y_pred==class_1)
actual_positives = (y_true==class_1)
actual_negatives = (y_true==class_0)
true_positive_rate = true_positives.sum() / actual_positives.sum()
false_positive_rate = false_positives.sum() / actual_negatives.sum()
print('False Positive Rate', false_positive_rate)
print('True Positive Rate', true_positive_rate)
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_true==class_1, y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# Plot point on ROC curve for the current threshold
plt.scatter(false_positive_rate, true_positive_rate)
plt.show()
# Show ROC AUC score
print('Area under the Receiver Operating Characteristic curve:',
roc_auc_score(y_true, y_pred_proba))
# Show confusion matrix & classification report
plot_confusion_matrix(y_true, y_pred)
print(classification_report(y_true, y_pred))
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0,1,0.05));
###Output
_____no_output_____
###Markdown
Use the class_weight parameter in scikit-learn Here's a fun demo you can explore! The next code cells do five things: 1. Generate dataWe use scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html) function to generate fake data for a binary classification problem, based on several parameters, including:- Number of samples- Weights, meaning "the proportions of samples assigned to each class."- Class separation: "Larger values spread out the clusters/classes and make the classification task easier."(We are generating fake data so it is easy to visualize.) 2. Split dataWe split the data three ways, into train, validation, and test sets. (For this toy example, it's not really necessary to do a three-way split. A two-way split, or even no split, would be ok. But I'm trying to demonstrate good habits, even in toy examples, to avoid confusion.) 3. Fit modelWe use scikit-learn to fit a [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) on the training data.We use this model parameter:> **class_weight : _dict or ‘balanced’, default: None_**> Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.> The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`. 4. Evaluate modelWe use our Logistic Regression model, which was fit on the training data, to generate predictions for the validation data.Then we print [scikit-learn's Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report), with many metrics, and also the accuracy score. We are comparing the correct labels to the Logistic Regression's predicted labels, for the validation set. 5. Visualize decision functionBased on these examples- https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/combine/plot_comparison_combine.html- http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/example-1-decision-regions-in-2d
###Code
from sklearn.model_selection import train_test_split
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
%matplotlib inline
from IPython.display import display
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
#1. Generate data
# Try re-running the cell with different values for these parameters
n_samples = 1000
weights = (0.95, 0.05)
class_sep = 0.8
X, y = make_classification(n_samples=n_samples, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2,
n_clusters_per_class=1, weights=weights,
class_sep=class_sep, random_state=0)
# 2. Split data
# Uses our custom train_validation_test_split function
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1, random_state=1)
# 3. Fit model
# Try re-running the cell with different values for this parameter
class_weight = None
model = LogisticRegression(solver='lbfgs', class_weight=class_weight)
model.fit(X_train, y_train)
# 4. Evaluate model
y_pred = model.predict(X_val)
print(classification_report(y_val, y_pred))
plot_confusion_matrix(y_val, y_pred)
# 5. Visualize decision regions
plt.figure(figsize=(10, 6))
plot_decision_regions(X_val, y_val, model, legend=0);
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2 — Predictive Modeling_ ROC AUC Objectives - Understand why accuracy is a misleading metric when classes are imbalanced- Use classification metric: ROC AUC- Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds- Use the class_weight parameter in scikit-learn Libraries category_encoders- Local Anaconda: `conda install -c conda-forge category_encoders`- Google Colab: `pip install category_encoders` [mlxtend](http://rasbt.github.io/mlxtend/) (to plot decision regions)- Local Anaconda: `conda install -c conda-forge mlxtend`- Google Colab: Already installed [tqdm](https://tqdm.github.io/) (for progress bars)- Local Anaconda: `conda install -c conda-forge tqdm`- Google Colab: Already installed
###Code
# !pip install category_encoders
###Output
_____no_output_____
###Markdown
Lending Club 🏦This lecture uses Lending Club data, historical and current. Predict if peer-to-peer loans are charged off or fully paid. Decide which loans to invest in. Background[According to Wikipedia,](https://en.wikipedia.org/wiki/Lending_Club)> Lending Club is the world's largest peer-to-peer lending platform. Lending Club enables borrowers to create unsecured personal loans between \\$1,000 and \\$40,000. The standard loan period is three years. **Investors can search and browse the loan listings on Lending Club website and select loans that they want to invest in based on the information supplied about the borrower, amount of loan, loan grade, and loan purpose.** Investors make money from interest. Lending Club makes money by charging borrowers an origination fee and investors a service fee.Lending Club's article about [Benefits of diversification](https://www.lendingclub.com/investing/investor-education/benefits-of-diversification) explains,> **With the investment minimum of \\$1,000, you can get up to 40 Notes at \$25 each.**You can read more good context in [Data-Driven Investment Strategies for Peer-to-Peer Lending: A Case Study for Teaching Data Science](https://www.liebertpub.com/doi/full/10.1089/big.2018.0092):> Current refers to a loan that is still being reimbursed in a timely manner. Late corresponds to a loan on which a payment is between 16 and 120 days overdue. If the payment is delayed by more than 121 days, the loan is considered to be in Default. If LendingClub has decided that the loan will not be paid off, then it is given the status of Charged-Off.> These dynamics imply that 5 months after the term of each loan has ended, every loan ends in one of two LendingClub states—fully paid or charged-off. We call these two statuses fully paid and defaulted, respectively, and we refer to a loan that has reached one of these statuses as expired. **One way to simplify the problem is to consider only loans that have expired at the time of analysis.**> A significant portion (13.5%) of loans ended in Default status; depending on how much of the loan was paid back, these loans might have resulted in a significant loss to investors who had invested in them. The remainder was Fully Paid—the borrower fully reimbursed the loan’s outstanding balance with interest, and the investor earned a positive return on his or her investment. Therefore, to avoid unsuccessful investments, **our goal is to estimate which loans are more likely to default and which will yield low returns.**
###Code
import pandas as pd
pd.options.display.max_columns = 200
pd.options.display.max_rows = 200
LOCAL = '../data/lendingclub/'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Tree-Ensembles/master/data/lendingclub/'
source = WEB
# Stratified sample, 10% of expired Lending Club loans, grades A-D
# Source: https://www.lendingclub.com/info/download-data.action
history = pd.read_csv(source + 'lending-club-subset.csv')
history['issue_d'] = pd.to_datetime(history['issue_d'], infer_datetime_format=True)
# Current loans available for manual investing, June 17, 2019
# Source: https://www.lendingclub.com/browse/browse.action
current = pd.read_csv(source + 'primaryMarketNotes_browseNotes_1-RETAIL.csv')
# Print the number of observations
print(f'{len(history)} historical loans')
print(f'{len(current)} current loans')
# Calculate percent of each loan repaid
history['percent_paid'] = history['total_pymnt'] / history['funded_amnt']
# See percent paid for charged off vs fully paid loans
history.groupby('loan_status')['percent_paid'].describe()
###Output
_____no_output_____
###Markdown
Begin with baselines: expected value of random decisions
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import percentileofscore
import seaborn as sns
from tqdm import tnrange
def simulate(n=10000, grades=['A','B','C','D'],
start_date='2007-07-01',
end_date='2019-03-01'):
"""
What if you picked 40 random loans for $25 investments?
How much would you have been paid back?
Repeat the simulation many times, and plot the distribution
of probable outcomes.
This doesn't consider fees or "time value of money."
"""
condition = ((history['grade'].isin(grades)) &
(history['issue_d'] >= start_date) &
(history['issue_d'] <= end_date))
possible = history[condition]
simulations = []
for _ in tnrange(n):
picks = possible.sample(40).copy()
picks['paid'] = 25 * picks['percent_paid']
paid = picks['paid'].sum()
simulations.append(paid)
simulations = pd.Series(simulations)
sns.distplot(simulations)
plt.axvline(x=1000)
percent = percentileofscore(simulations, 1000)
plt.title(f'{percent}% of simulations did not profit. {start_date}-{end_date}, {grades}')
simulate()
simulate(grades=['A'])
simulate(grades=['D'])
###Output
_____no_output_____
###Markdown
Wrangle data- Engineer date-based features- Remove features to avoid leakage- Do 3-way split, train/validate/test
###Code
# Engineer date-based features
# Transform earliest_cr_line to an integer:
# How many days the earliest credit line was open, before the loan was issued.
# For current loans available for manual investing, assume the loan will be issued today.
history['earliest_cr_line'] = pd.to_datetime(history['earliest_cr_line'], infer_datetime_format=True)
history['earliest_cr_line'] = history['issue_d'] - history['earliest_cr_line']
history['earliest_cr_line'] = history['earliest_cr_line'].dt.days
current['earliest_cr_line'] = pd.to_datetime(current['earliest_cr_line'], infer_datetime_format=True)
current['earliest_cr_line'] = pd.Timestamp.today() - current['earliest_cr_line']
current['earliest_cr_line'] = current['earliest_cr_line'].dt.days
# Transform earliest_cr_line for the secondary applicant
history['sec_app_earliest_cr_line'] = pd.to_datetime(history['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
history['sec_app_earliest_cr_line'] = history['issue_d'] - history['sec_app_earliest_cr_line']
history['sec_app_earliest_cr_line'] = history['sec_app_earliest_cr_line'].dt.days
current['sec_app_earliest_cr_line'] = pd.to_datetime(current['sec_app_earliest_cr_line'], infer_datetime_format=True, errors='coerce')
current['sec_app_earliest_cr_line'] = pd.Timestamp.today() - current['sec_app_earliest_cr_line']
current['sec_app_earliest_cr_line'] = current['sec_app_earliest_cr_line'].dt.days
# Engineer features for issue date year & month
history['issue_d_year'] = history['issue_d'].dt.year
history['issue_d_month'] = history['issue_d'].dt.month
current['issue_d_year'] = pd.Timestamp.today().year
current['issue_d_month'] = pd.Timestamp.today().month
# Use Python sets to compare the historical columns & current columns
common_columns = set(history.columns) & set(current.columns)
just_history = set(history.columns) - set(current.columns)
just_current = set(current.columns) - set(history.columns)
# Train on the historical data.
# For features, use only the common columns shared by the historical & current data.
# For the target, use `loan_status` ('Fully Paid' or 'Charged Off')
features = list(common_columns)
target = 'loan_status'
X = history[features]
y = history[target]
# Do train/validate/test 3-way split
from sklearn.model_selection import train_test_split
X_trainval, X_test, y_trainval, y_test = train_test_split(
X, y, test_size=20000, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_trainval, y_trainval, test_size=20000,
stratify=y_trainval, random_state=42)
print('X_train shape', X_train.shape)
print('y_train shape', y_train.shape)
print('X_val shape', X_val.shape)
print('y_val shape', y_val.shape)
print('X_test shape', X_test.shape)
print('y_test shape', y_test.shape)
###Output
X_train shape (88334, 108)
y_train shape (88334,)
X_val shape (20000, 108)
y_val shape (20000,)
X_test shape (20000, 108)
y_test shape (20000,)
###Markdown
Understand why accuracy is a misleading metric when classes are imbalanced Get accuracy score for majority class baseline
###Code
y_train.value_counts(normalize=True)
import numpy as np
from sklearn.metrics import accuracy_score
majority_class = y_train.mode()[0]
y_pred = np.full_like(y_val, fill_value=majority_class)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for majority class baseline
###Code
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____
###Markdown
Get precision & recall for majority class baseline
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
/anaconda3/lib/python3.7/site-packages/sklearn/metrics/classification.py:1437: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
###Markdown
Get ROC AUC score for majority class baseline[sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
###Code
from sklearn.metrics import roc_auc_score
# What if we predicted 100% probability of the positive class for every prediction?
# This is like the majority class baseline, but with predicted probabilities,
# instead of just discrete classes.
# VERY IMPORTANT — Use predicted probabilities with ROC AUC score!
# Because, it's a metric of how well you rank/sort predicted probabilities.
y_pred_proba = np.full_like(y_val, fill_value=1.00)
roc_auc_score(y_val, y_pred_proba)
# ROC AUC is 0.50 by definition when predicting any constant probability value
y_pred_proba = np.full_like(y_val, fill_value=0)
roc_auc_score(y_val, y_pred_proba)
y_pred_proba = np.full_like(y_val, fill_value=0.50)
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Fit a model Count missing values
###Code
null_counts = X_train.isnull().sum().sort_values(ascending=False)
null_counts.reset_index()
many_nulls = null_counts[:73].index
print(list(many_nulls))
###Output
['member_id', 'sec_app_mths_since_last_major_derog', 'sec_app_revol_util', 'sec_app_open_acc', 'sec_app_num_rev_accts', 'sec_app_fico_range_low', 'sec_app_inq_last_6mths', 'sec_app_open_act_il', 'revol_bal_joint', 'sec_app_earliest_cr_line', 'sec_app_chargeoff_within_12_mths', 'sec_app_fico_range_high', 'sec_app_mort_acc', 'sec_app_collections_12_mths_ex_med', 'annual_inc_joint', 'dti_joint', 'desc', 'mths_since_last_record', 'mths_since_recent_bc_dlq', 'mths_since_last_major_derog', 'mths_since_recent_revol_delinq', 'il_util', 'mths_since_rcnt_il', 'open_acc_6m', 'open_act_il', 'total_cu_tl', 'total_bal_il', 'max_bal_bc', 'open_il_12m', 'open_rv_24m', 'open_il_24m', 'inq_last_12m', 'all_util', 'open_rv_12m', 'inq_fi', 'mths_since_last_delinq', 'mths_since_recent_inq', 'num_tl_120dpd_2m', 'mo_sin_old_il_acct', 'emp_title', 'emp_length', 'pct_tl_nvr_dlq', 'avg_cur_bal', 'mo_sin_rcnt_rev_tl_op', 'tot_cur_bal', 'num_tl_90g_dpd_24m', 'tot_coll_amt', 'num_actv_rev_tl', 'num_il_tl', 'num_accts_ever_120_pd', 'total_rev_hi_lim', 'total_il_high_credit_limit', 'num_tl_op_past_12m', 'num_bc_tl', 'num_actv_bc_tl', 'num_tl_30dpd', 'num_rev_accts', 'num_rev_tl_bal_gt_0', 'tot_hi_cred_lim', 'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_tl', 'num_op_rev_tl', 'bc_util', 'percent_bc_gt_75', 'bc_open_to_buy', 'mths_since_recent_bc', 'num_bc_sats', 'num_sats', 'acc_open_past_24mths', 'total_bc_limit', 'total_bal_ex_mort', 'mort_acc', 'title']
###Markdown
Wrangle data
###Code
def wrangle(X):
X = X.copy()
# Engineer new feature for every feature: is the feature null?
for col in X:
X[col+'_NULL'] = X[col].isnull()
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Convert employment length from string to float
X['emp_length'] = X['emp_length'].str.replace(r'\D','').astype(float)
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Get length of free text fields
X['title'] = X['title'].str.len()
X['desc'] = X['desc'].str.len()
X['emp_title'] = X['emp_title'].str.len()
# Convert sub_grade from string "A1"-"D5" to integer 1-20
sub_grade_ranks = {'A1': 1, 'A2': 2, 'A3': 3, 'A4': 4, 'A5': 5, 'B1': 6, 'B2': 7,
'B3': 8, 'B4': 9, 'B5': 10, 'C1': 11, 'C2': 12, 'C3': 13, 'C4': 14,
'C5': 15, 'D1': 16, 'D2': 17, 'D3': 18, 'D4': 19, 'D5': 20}
X['sub_grade'] = X['sub_grade'].map(sub_grade_ranks)
# Drop some columns
X = X.drop(columns='id') # Always unique
X = X.drop(columns='url') # Always unique
X = X.drop(columns='member_id') # Always null
X = X.drop(columns='grade') # Duplicative of sub_grade
X = X.drop(columns='zip_code') # High cardinality
# Only use these features which had nonzero permutation importances in earlier models
features = ['acc_open_past_24mths', 'addr_state', 'all_util', 'annual_inc',
'annual_inc_joint', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
'collections_12_mths_ex_med', 'delinq_amnt', 'desc_NULL', 'dti',
'dti_joint', 'earliest_cr_line', 'emp_length', 'emp_length_NULL',
'emp_title', 'emp_title_NULL', 'emp_title_owner', 'fico_range_high',
'funded_amnt', 'home_ownership', 'inq_last_12m', 'inq_last_6mths',
'installment', 'int_rate', 'issue_d_month', 'issue_d_year', 'loan_amnt',
'max_bal_bc', 'mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op',
'mo_sin_rcnt_rev_tl_op', 'mort_acc', 'mths_since_last_major_derog_NULL',
'mths_since_last_record', 'mths_since_recent_bc', 'mths_since_recent_inq',
'num_actv_bc_tl', 'num_actv_rev_tl', 'num_op_rev_tl', 'num_rev_tl_bal_gt_0',
'num_tl_120dpd_2m_NULL', 'open_rv_12m_NULL', 'open_rv_24m',
'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'pub_rec_bankruptcies', 'purpose',
'revol_bal', 'revol_bal_joint', 'sec_app_earliest_cr_line',
'sec_app_fico_range_high', 'sec_app_open_acc', 'sec_app_open_act_il',
'sub_grade', 'term', 'title', 'title_NULL', 'tot_coll_amt',
'tot_hi_cred_lim', 'total_acc', 'total_bal_il', 'total_bc_limit',
'total_cu_tl', 'total_rev_hi_lim']
X = X[features]
# Return the wrangled dataframe
return X
X_train = wrangle(X_train)
X_val = wrangle(X_val)
X_test = wrangle(X_test)
%%time
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
)
pipeline.fit(X_train, y_train);
###Output
CPU times: user 1min 2s, sys: 1.28 s, total: 1min 3s
Wall time: 8.72 s
###Markdown
Get accuracy score for model
###Code
y_pred = pipeline.predict(X_val)
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
Get confusion matrix for model
###Code
plot_confusion_matrix(y_val, y_pred);
###Output
_____no_output_____
###Markdown
Get precision & recall for model
###Code
print(classification_report(y_val, y_pred))
###Output
precision recall f1-score support
Charged Off 0.54 0.03 0.05 3503
Fully Paid 0.83 1.00 0.90 16497
accuracy 0.83 20000
macro avg 0.68 0.51 0.48 20000
weighted avg 0.78 0.83 0.75 20000
###Markdown
Get ROC AUC score for model
###Code
y_pred_proba = pipeline.predict_proba(X_val)[:, 1]
roc_auc_score(y_val, y_pred_proba)
###Output
_____no_output_____
###Markdown
Understand ROC AUC (Receiver Operating Characteristic, Area Under the Curve) Scikit-Learn docs- [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)- [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)- [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) More links- [ROC curves and Area Under the Curve explained](https://www.dataschool.io/roc-curves-and-auc-explained/)- [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)[Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings."ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative." ROC AUC measures how well a classifier ranks predicted probabilities. It ranges from 0 to 1. A naive majority class baseline will have an ROC AUC score of 0.5. Visualize the ROC curve by plotting true positive rate vs false positive rate at varying thresholds
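The quoted pairwise-ranking interpretation can be verified directly with a brute-force count over all positive/negative pairs (self-contained toy data; the half-credit term handles the rare case of tied scores):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

np.random.seed(0)
y_true = np.random.randint(0, 2, size=200)
scores = np.random.rand(200) + 0.3 * y_true   # noisy scores, slightly higher for positives

pos = scores[y_true == 1]
neg = scores[y_true == 0]
pairwise = ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

print(pairwise)                        # fraction of pairs ranked correctly
print(roc_auc_score(y_true, scores))   # the same number
```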
###Code
from ipywidgets import interact, fixed
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.utils.multiclass import unique_labels
def set_threshold(y_true, y_pred_proba, threshold=0.5):
"""
For binary classification problems.
y_pred_proba : predicted probability of class 1
"""
# Apply threshold to predicted probabilities
# to get discrete predictions
class_0, class_1 = unique_labels(y_true)
y_pred = np.full_like(y_true, fill_value=class_0)
y_pred[y_pred_proba > threshold] = class_1
# Plot distribution of predicted probabilities
ax = sns.distplot(y_pred_proba)
ax.axvline(threshold, color='red')
plt.title('Distribution of predicted probabilities')
plt.show()
# Calculate true positive rate and false positive rate
true_positives = (y_pred==y_true) & (y_pred==class_1)
false_positives = (y_pred!=y_true) & (y_pred==class_1)
actual_positives = (y_true==class_1)
actual_negatives = (y_true==class_0)
true_positive_rate = true_positives.sum() / actual_positives.sum()
false_positive_rate = false_positives.sum() / actual_negatives.sum()
print('False Positive Rate', false_positive_rate)
print('True Positive Rate', true_positive_rate)
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_true==class_1, y_pred_proba)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# Plot point on ROC curve for the current threshold
plt.scatter(false_positive_rate, true_positive_rate)
plt.show()
# Show ROC AUC score
print('Area under the Receiver Operating Characteristic curve:',
roc_auc_score(y_true, y_pred_proba))
# Show confusion matrix & classification report
plot_confusion_matrix(y_true, y_pred)
print(classification_report(y_true, y_pred))
interact(set_threshold,
y_true=fixed(y_val),
y_pred_proba=fixed(y_pred_proba),
threshold=(0,1,0.05));
###Output
_____no_output_____
###Markdown
Use the class_weight parameter in scikit-learn Here's a fun demo you can explore! The next code cells do five things: 1. Generate dataWe use scikit-learn's [make_classification](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html) function to generate fake data for a binary classification problem, based on several parameters, including:- Number of samples- Weights, meaning "the proportions of samples assigned to each class."- Class separation: "Larger values spread out the clusters/classes and make the classification task easier."(We are generating fake data so it is easy to visualize.) 2. Split dataWe split the data three ways, into train, validation, and test sets. (For this toy example, it's not really necessary to do a three-way split. A two-way split, or even no split, would be ok. But I'm trying to demonstrate good habits, even in toy examples, to avoid confusion.) 3. Fit modelWe use scikit-learn to fit a [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) on the training data.We use this model parameter:> **class_weight : _dict or ‘balanced’, default: None_**> Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one.> The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`. 4. Evaluate modelWe use our Logistic Regression model, which was fit on the training data, to generate predictions for the validation data.Then we print [scikit-learn's Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report), with many metrics, and also the accuracy score. We are comparing the correct labels to the Logistic Regression's predicted labels, for the validation set. 5. Visualize decision functionBased on these examples- https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/combine/plot_comparison_combine.html- http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/example-1-decision-regions-in-2d
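One more self-contained detail worth seeing before the demo: passing `class_weight='balanced'` is equivalent to passing an explicit dict built from the quoted formula, so you can always swap in hand-chosen weights later.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1,
                           weights=(0.95, 0.05), class_sep=0.8, random_state=0)

# Build the dict that 'balanced' would produce: n_samples / (n_classes * np.bincount(y))
weights = dict(enumerate(len(y) / (2 * np.bincount(y))))
print(weights)

m1 = LogisticRegression(solver='lbfgs', class_weight='balanced').fit(X, y)
m2 = LogisticRegression(solver='lbfgs', class_weight=weights).fit(X, y)
print(np.allclose(m1.coef_, m2.coef_))   # True: the two parameterizations match
```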
###Code
from sklearn.model_selection import train_test_split
def train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1,
random_state=None, shuffle=True):
assert train_size + val_size + test_size == 1
X_train_val, X_test, y_train_val, y_test = train_test_split(
X, y, test_size=test_size, random_state=random_state, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=val_size/(train_size+val_size),
random_state=random_state, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
%matplotlib inline
from IPython.display import display
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, classification_report
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
#1. Generate data
# Try re-running the cell with different values for these parameters
n_samples = 1000
weights = (0.95, 0.05)
class_sep = 0.8
X, y = make_classification(n_samples=n_samples, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=2,
n_clusters_per_class=1, weights=weights,
class_sep=class_sep, random_state=0)
# 2. Split data
# Uses our custom train_validation_test_split function
X_train, X_val, X_test, y_train, y_val, y_test = train_validation_test_split(
X, y, train_size=0.8, val_size=0.1, test_size=0.1, random_state=1)
# 3. Fit model
# Try re-running the cell with different values for this parameter
class_weight = None
model = LogisticRegression(solver='lbfgs', class_weight=class_weight)
model.fit(X_train, y_train)
# 4. Evaluate model
y_pred = model.predict(X_val)
print(classification_report(y_val, y_pred))
plot_confusion_matrix(y_val, y_pred)
# 5. Visualize decision regions
plt.figure(figsize=(10, 6))
plot_decision_regions(X_val, y_val, model, legend=0);
###Output
precision recall f1-score support
0 0.98 1.00 0.99 96
1 1.00 0.50 0.67 4
accuracy 0.98 100
macro avg 0.99 0.75 0.83 100
weighted avg 0.98 0.98 0.98 100
aula_2/regressao_linear.ipynb | ###Markdown
Linear regressionRegression is the task of finding a function that approximates a dataset in which the target variable is continuous (that is, the function's image is $\mathbb{R}$). In this example, we work with a dataset where the target variable is the median house price in a given neighborhood, and we have access to a single predictor variable: the percentage of that neighborhood's population considered low income.Intuitively, we know that neighborhoods with a higher concentration of low-income people have some kind of relationship with lower house prices. We will start by looking at our data to get a general idea of the problem we are dealing with. The `pandas` library gives us access to functions for handling tabular data. We use this library to import our data, look at some basic statistics, and visualize the data.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.datasets import load_boston
%matplotlib inline
boston = load_boston()
dados = pd.DataFrame({'baixa renda': boston.data[:, -1],
'preço': boston.target})
dados.head()
dados = dados.sort_values(by='baixa renda')
dados.head()
dados.shape
dados.describe()
dados.plot.scatter('baixa renda', 'preço');
dados.hist();
dados.hist(bins=5);
###Output
_____no_output_____
###Markdown
Correlation coefficientThe Pearson correlation coefficient, often called simply the correlation or Pearson's r, is a statistic that measures how linearly dependent one variable is on another. The value ranges from -1 (indicating a perfect negative correlation, i.e., an increase in the independent variable is accompanied by a decrease in the dependent variable) to 1 (likewise a perfect correlation, but positive). A value of zero indicates that there is no linear correlation between the variables.There are other types of correlation for other cases, but the Pearson coefficient is used for linear dependencies.In Python, the correlation function `pearsonr` is available in the `scipy.stats` package. It returns a pair of numbers, the first of which is the correlation coefficient (the second is the p-value, which we will ignore).
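A minimal sketch of the formula behind `pearsonr`, on small synthetic vectors (the notebook's columns are not needed here): the coefficient is the covariance of the two variables divided by the product of their standard deviations.

```python
import numpy as np
from scipy.stats import pearsonr

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# r = cov(x, y) / (std(x) * std(y)), using the same (n - 1) normalization throughout
manual = np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(manual)
print(pearsonr(x, y)[0])   # same value
```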
###Code
pearsonr(dados['baixa renda'], dados['preço'])[0]
renda = dados['baixa renda'].values.reshape(-1, 1)
preco = dados['preço'].values.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Regression algorithmsSince we are interested in linear regression, the algorithms used will return a line, usually represented by the equation $y = mx + y_0$, where we want to find the values $m$ and $y_0$. In the `sklearn` library, these values are called `coef_` and `intercept_`. Simple linear regression
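A quick self-contained check of what `coef_` and `intercept_` mean: fit a line to tiny synthetic data and reproduce `predict` by hand with $y = mx + y_0$.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.1, 4.9, 7.2, 8.8])

reg = LinearRegression().fit(x, y)
m, y0 = reg.coef_[0], reg.intercept_

print(m * x.ravel() + y0)   # y = m*x + y0, computed by hand
print(reg.predict(x))       # the same values
```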
###Code
regressor = LinearRegression()
regressor.fit(renda, preco)
y0 = regressor.intercept_
m = regressor.coef_[0]
y0, m
plt.scatter(renda, preco, marker='.')
x = np.linspace(0, 40, 2).reshape(-1, 1)
plt.plot(x, regressor.predict(x), c='r')
plt.axvline(0, c='gray')
f0 = regressor.predict(np.array([0]).reshape(-1, 1))
plt.axhline(f0, c='gray')
f0
###Output
_____no_output_____
###Markdown
Avaliando uma regressãoExistem diversos cálculos para avaliar a qualidade de uma regressão. Comumente utilizados temos o fator $R^2$, a média dos erros absolutos, e a média dos erros quadrados. Todas essas funções estão disponíveis no módulo `sklearn.metrics`.O fator $R^2$, chamado de coeficiente de determinação, pode ser visto como quanto uma variável tem poder de predição sobre a outra. Para o caso da regressão linear, o $R^2$ é equivalente ao quadrado do coeficiente de Pearson.Utilizaremos a função `predict` do nosso regressor para encontrar os valores de `y` da reta nos pontos específicos onde há exemplos.
###Code
predito = regressor.predict(renda)
r2_score(preco, predito)
pearsonr(preco, renda)[0] ** 2
mean_absolute_error(preco, predito)
mean_squared_error(preco, predito)
###Output
_____no_output_____
###Markdown
Regressão linear com penalidade L1 (Lasso)Outros algoritmo comuns para regressão são Lasso e Ridge, que utilizam as penalidades L1 e L2, respectivamente, para regularizar os dados. Também existe o Elastic Net, que combina as duas penalidades.Esse tipo de regularização fazem com que valores muito fora do comum (_outliers_) sejam menos considerados, e os coeficientes tendem a ficar mais simples. Essa regularização é muito comum para quando há mais de uma variável preditora, mas utilizaremos aqui para mostrar o funcionamento.O algoritmo Lasso utiliza um hiper-parâmetro $\alpha$ (alfa), com valor padrão 1.0, que determina quanto de penalidade deve ser aplicado. Quando esse valor tende a zero, o algoritmo se aproxima do algoritmo de regressão simples (mas não é recomendado fazer isso pois há instabilidade numérica).
###Code
regressor = Lasso(alpha=1.0).fit(renda, preco)
y0 = regressor.intercept_
m = regressor.coef_
print(y0, m)
x = np.linspace(0, 40, 2).reshape(-1, 1)
plt.scatter(renda, preco, marker='.')
plt.plot(x, regressor.predict(x), color='red');
predito = regressor.predict(renda)
print('R^2', r2_score(preco, predito))
print('MAE', mean_absolute_error(preco, predito))
print('MSE', mean_squared_error(preco, predito))
###Output
_____no_output_____
###Markdown
Como podemos ver, o coeficiente mudou muito pouco, de -0.95 para -0.93. Os valores de erro também continuam praticamente iguais. Tentemos agora com valores diferentes de $\alpha$.
###Code
cores = ['red', 'green', 'black', 'gray']
alfas = [0.1, 0.5, 0.8, 1.5]
plt.scatter(renda, preco, marker='.')
for cor, alfa in zip(cores, alfas):
regressor = Lasso(alpha=alfa).fit(renda, preco)
y0 = regressor.intercept_
m = regressor.coef_[0]
predito = regressor.predict(renda)
r2 = r2_score(preco, predito)
mae = mean_absolute_error(preco, predito)
mse = mean_squared_error(preco, predito)
print(f'''
y0: {y0}
m: {m}
R^2: {r2}
Erro absoluto: {mae}
Erro quadrado: {mse}
''')
plt.plot(x, regressor.predict(x), color=cor)
###Output
_____no_output_____
###Markdown
Alterações em $\alpha$ causam poucas variações da reta gerada.É importante notar que as avaliações de algoritmos de aprendizado geralmente não são aplicadas no mesmo conjunto de dados utilizado para treinar o algoritmo, como estamos fazendo aqui. Mais para frente utilizaremos avaliações menos ingênuas. Alteração de atributos / atributos não-linearesUm problema comum em ciência de dados é encontrar os atributos corretos. Muitas vezes, podemos aplicar funções matemáticas para alterar ou combinar os atributos que temos em mãos para encontrar modelos superiores.Podemos pensar, existe algum outro tipo de função que se adequaria melhor aos nossos dados do que uma reta? Para quem está acostumado com a função $\log$, a distribuição dos nossos dados se assemelha um pouco com a função $\log$ aplicada no inverso do número, como podemos visualizar abaixo.
###Code
x = np.linspace(3, 100, 100)
y = np.log(1 / x)
plt.plot(x, y);
###Output
_____no_output_____
###Markdown
Como podemos fazer para ajustar uma curva logarítmica aos nossos dados? A resposta é que não precisamos! Basta alterar os atributos para que estejam em espaço log, e então aplicar uma regressão linear.
###Code
plt.scatter(np.log(renda), preco, marker='.');
pearsonr(np.log(renda), preco)[0]
###Output
_____no_output_____
###Markdown
Como podemos ver, o coeficiente de correlação foi de -0.737 para -0.815, indicando que agora há uma relação _linear_ mais forte entre as variáveis.
###Code
cores = ['red', 'green', 'black', 'gray', 'yellow']
alfas = [0.1, 0.5, 0.8, 1.0, 1.5]
plt.scatter(np.log(renda), preco, marker='.')
for cor, alfa in zip(cores, alfas):
regressor = Lasso(alpha=alfa).fit(np.log(renda), preco)
y0 = regressor.intercept_
m = regressor.coef_[0]
predito = regressor.predict(np.log(renda))
r2 = r2_score(preco, predito)
mae = mean_absolute_error(preco, predito)
mse = mean_squared_error(preco, predito)
print(f'''
y0: {y0}
m: {m}
R^2: {r2}
Erro absoluto: {mae}
Erro quadrado: {mse}
''')
x = np.linspace(0, np.log(40), 200).reshape(-1, 1)
plt.plot(x, regressor.predict(x), color=cor)
regressor = LinearRegression().fit(np.log(renda), preco)
x = np.linspace(renda.min(), renda.max(), 300).reshape(-1, 1)
y = regressor.predict(np.log(x))
plt.scatter(renda, preco, marker='.')
plt.plot(x, y, c='r');
###Output
_____no_output_____
###Markdown
Regressão polinomialContraintuitivamente, regressão polinomial é só um caso específico de regressão linear. A regressão linear pode ser, e geralmente é feita, utilizando mais de uma variável preditora, permitindo modelos mais complexos. Nesse caso, a equação da reta em múltiplas dimensões passa a ser $y = y_0 + m_0x_0 + m_1x_1 + m_2x_2 + \ldots + m_nx_n$. Os valores de $x$ são os valores observados das variáveis; o algoritmo de regressão é responsável por achar o valor $y_0$ e todos os valores dos coeficientes $m$.Consideremos agora que temos somente uma variável preditora, como no conjunto de dados que estamos tratando. Podemos modificar a variável preditora como fizemos com a operação $\log$, gerando várias variáveis que podem ser usadas em uma regressão linear multidimensional, como acima. No caso específico que modificamos nossa variável preditora $x$ utilizando suas potências, ficamos com as variáveis preditoras $x, x^2, x^3, ..., x^n$, para um determinado $n$ que escolhermos. Assim, a equação da "reta" ajustada passa a ser $y = y_0 + m_0x + m_1x^2 + m_2x^3 + ... + m_nx^{n+1}$, um polinômio de grau $n+1$.Vamos começar ajustando um polinômio de grau 2 (uma parábola) aos nossos dados, e comparar com um polinômio de grau 14.Trivia: o nome regressão linear se refere a uma combinação linear entre as variáveis, e não porque se ajusta uma linha. Por isso a regressão polinomial é uma regressão linear. A equação da regressão polinomial no primeiro parágrafo dessa célula também pode ser escrita como $y - y_0 = \vec{m} \cdot \vec{x} = \sum m_i x_i$, onde o lado direito da equação é a forma canônica da combinação linear. Mesmo que as variáveis sejam alteradas por diversas funções não-lineares, como fizemos com $\log$ anteriormente, a combinação entre os atributos é sempre feita nessa forma linear, multiplicando os coeficientes pelos valores de atributos, e somando o resultado. No caso polinomial, temos $x_i = x^i$.
###Code
x1 = renda
x2 = renda ** 2
x = np.hstack([x1, x2])
regressor = LinearRegression().fit(x, preco)
y0 = regressor.intercept_
m = regressor.coef_
print(y0, m)
curva = regressor.predict(x)
print(r2_score(preco, curva), mean_absolute_error(preco, curva), mean_squared_error(preco, curva))
plt.scatter(renda, preco)
plt.plot(renda, curva, c='red');
x_l = [renda]
for potencia in range(2, 15):
x_l.append(renda ** potencia)
x = np.hstack(x_l)
regressor = LinearRegression().fit(x, preco)
print(regressor.coef_)
curva = regressor.predict(x)
print(r2_score(preco, curva), mean_absolute_error(preco, curva), mean_squared_error(preco, curva))
plt.scatter(renda, preco)
plt.plot(renda, curva, c='red');
###Output
_____no_output_____
###Markdown
Apesar do polinômio de grau 14 ser um modelo mais complexo, seu erro é maior que os polinômios de grau 1 (reta) e 2. Isso acontece porque o modelo está _superajustado_ (_overfitted_) aos dados representados, e não generaliza a tendência dos dados. Nesses casos, podemos utilizar as penalidades L1 e L2 como discutidos anteriormente.(Pode ser que um aviso de `ConvergenceWarning` seja mostrado; não é um erro do programa, mas sim um aviso de que o resultado pode ser numericamente instável devido à alta dimensão).
###Code
cores = ['red', 'green', 'black', 'gray', 'yellow']
alfas = [0.1, 0.5, 0.8, 1.0, 1.5]
plt.scatter(renda, preco)
for cor, alfa in zip(cores, alfas):
regressor = Lasso(alpha=alfa)
regressor.fit(x, preco)
predito = regressor.predict(x)
print(r2_score(preco, predito), mean_squared_error(preco, predito))
graphx = np.linspace(0, np.log(40), 200).reshape(-1, 1)
plt.plot(renda, predito, color=cor)
###Output
_____no_output_____
###Markdown
Linear regressionRegression is the task of finding a function that approximates a dataset in which the target variable is continuous (that is, the function's image is $\mathbb{R}$). In this example we will work with a dataset where the target variable is the median house price in a given neighborhood, and we have access to a single predictor variable: the percentage of that neighborhood's population considered low income.Intuitively, we know that neighborhoods with a higher concentration of low-income people have some kind of relationship with lower house prices in that neighborhood. We will start by looking at our data to get a general idea of the problem we are dealing with. The `pandas` library gives us access to functions for handling tabular data. We use it to import our data, look at some basic statistics, and visualize the data.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.datasets import load_boston
%matplotlib inline
boston = load_boston()
dados = pd.DataFrame({'baixa renda': boston.data[:, -1],
'preço': boston.target})
dados.head()
dados = dados.sort_values(by='baixa renda')
dados.head()
dados.shape
dados.describe()
dados.plot.scatter('baixa renda', 'preço');
dados.hist();
dados.hist(bins=5);
###Output
_____no_output_____
###Markdown
Correlation coefficientThe Pearson correlation coefficient, often called simply the correlation or Pearson's r, is a statistic that measures how linearly dependent one variable is on another. The value ranges from -1 (indicating a perfect negative correlation, i.e., an increase in the independent variable causes a decrease in the dependent variable) to 1 (similarly a perfect correlation, but positive). A value of zero indicates that there is no linear correlation between the variables.There are other types of correlation for other situations, but Pearson's coefficient is the one used for linear dependencies.In Python, the correlation function `pearsonr` is available in the `scipy.stats` package. It returns a pair of numbers, the first of which is the correlation coefficient (the second is the p-value, which we will ignore).
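For reference (a standard definition, added here for clarity and not part of the original notebook), for paired samples $x_i, y_i$ the coefficient is $r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}$.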
###Code
pearsonr(dados['baixa renda'], dados['preço'])[0]
renda = dados['baixa renda'].values.reshape(-1, 1)
preco = dados['preço'].values.reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Regression algorithmsSince we are interested in linear regression, the algorithms used will return a line, usually represented by the equation $y = mx + y_0$, where we want to find the values $m$ and $y_0$. In the `sklearn` library, these values are called `coef_` and `intercept_`. Simple linear regression
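As a brief aside (a standard result, not derived in the original notebook): for a single predictor, the least-squares fit has the closed form $m = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}$ and $y_0 = \bar{y} - m\,\bar{x}$, which is equivalent to what `LinearRegression` computes below.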
###Code
regressor = LinearRegression()
regressor.fit(renda, preco)
y0 = regressor.intercept_
m = regressor.coef_[0]
y0, m
plt.scatter(renda, preco, marker='.')
x = np.linspace(0, 40, 2).reshape(-1, 1)
plt.plot(x, regressor.predict(x), c='r')
plt.axvline(0, c='gray')
f0 = regressor.predict(np.array([0]).reshape(-1, 1))
plt.axhline(f0, c='gray')
f0
###Output
_____no_output_____
###Markdown
Evaluating a regressionThere are several measures for evaluating the quality of a regression. Commonly used ones are the $R^2$ factor, the mean absolute error, and the mean squared error. All of these functions are available in the `sklearn.metrics` module.The $R^2$ factor, called the coefficient of determination, can be seen as how much predictive power one variable has over the other. In the case of linear regression, $R^2$ is equivalent to the square of the Pearson coefficient.We will use our regressor's `predict` function to obtain the `y` values of the line at the specific points where there are examples.
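For reference (standard definitions, added for clarity), with predictions $\hat{y}_i$ over $n$ examples: $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$, $\mathrm{MAE} = \frac{1}{n}\sum_i |y_i - \hat{y}_i|$, and $\mathrm{MSE} = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2$.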
###Code
predito = regressor.predict(renda)
r2_score(preco, predito)
pearsonr(preco, renda)[0] ** 2
mean_absolute_error(preco, predito)
mean_squared_error(preco, predito)
###Output
_____no_output_____
###Markdown
Linear regression with an L1 penalty (Lasso)Other common regression algorithms are Lasso and Ridge, which use the L1 and L2 penalties, respectively, to regularize the fit. There is also Elastic Net, which combines the two penalties.This kind of regularization makes highly unusual values (_outliers_) count for less, and the coefficients tend to become simpler. It is most useful when there is more than one predictor variable, but we will use it here to show how it works.The Lasso algorithm uses a hyperparameter $\alpha$ (alpha), with default value 1.0, which determines how much penalty is applied. As this value tends to zero, the algorithm approaches plain linear regression (but doing so is not recommended because of numerical instability).
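As a reference for what $\alpha$ controls (taken from scikit-learn's documented formulation, not from the original notebook), `Lasso` minimizes an objective of the form $\frac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha\lVert w\rVert_1$ (with the intercept fitted separately), so larger $\alpha$ shrinks the coefficients $w$ more strongly toward zero.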
###Code
regressor = Lasso(alpha=1.0).fit(renda, preco)
y0 = regressor.intercept_
m = regressor.coef_
print(y0, m)
x = np.linspace(0, 40, 2).reshape(-1, 1)
plt.scatter(renda, preco, marker='.')
plt.plot(x, regressor.predict(x), color='red');
predito = regressor.predict(renda)
print('R^2', r2_score(preco, predito))
print('MAE', mean_absolute_error(preco, predito))
print('MSE', mean_squared_error(preco, predito))
###Output
R^2 0.5439135471381261
MAE 4.497442134682039
MSE 38.502615919439314
###Markdown
As we can see, the coefficient changed very little, from -0.95 to -0.93. The error values also remain practically the same. Let us now try different values of $\alpha$.
###Code
cores = ['red', 'green', 'black', 'gray']
alfas = [0.1, 0.5, 0.8, 1.5]
plt.scatter(renda, preco, marker='.')
for cor, alfa in zip(cores, alfas):
regressor = Lasso(alpha=alfa).fit(renda, preco)
y0 = regressor.intercept_
m = regressor.coef_[0]
predito = regressor.predict(renda)
r2 = r2_score(preco, predito)
mae = mean_absolute_error(preco, predito)
mse = mean_squared_error(preco, predito)
print(f'''
y0: {y0}
m: {m}
R^2: {r2}
Erro absoluto: {mae}
Erro quadrado: {mse}
''')
plt.plot(x, regressor.predict(x), color=cor)
###Output
y0: [34.52897927]
m: -0.9480844848034748
R^2: 0.5441439700819961
Erro absoluto: 4.504320887620784
Erro quadrado: 38.483163716789605
y0: [34.42953282]
m: -0.9402250089854104
R^2: 0.5440881099743914
Erro absoluto: 4.500807587758577
Erro quadrado: 38.48787940228044
y0: [34.35494799]
m: -0.9343304021218622
R^2: 0.5439973372995335
Erro absoluto: 4.498549346886957
Erro quadrado: 38.49554239120306
y0: [34.18091671]
m: -0.9205763194402496
R^2: 0.5436226090776842
Erro absoluto: 4.494917885692384
Erro quadrado: 38.527176781370756
###Markdown
Changes in $\alpha$ cause little variation in the fitted line.It is important to note that evaluations of learning algorithms are usually not performed on the same dataset used to train the algorithm, as we are doing here. Later on we will use less naive evaluations. Feature transformation / non-linear featuresA common problem in data science is finding the right features. Often we can apply mathematical functions to transform or combine the features at hand in order to find better models.We might ask: is there some other kind of function that would fit our data better than a straight line? For those familiar with the $\log$ function, the distribution of our data looks somewhat like the $\log$ function applied to the reciprocal of the number, as we can see below.
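As a side note on the evaluation caveat above, a held-out evaluation would look roughly like the sketch below (a minimal illustration reusing the notebook's variables, not part of the original notebook):

```python
from sklearn.model_selection import train_test_split

# Hold out part of the data and evaluate only on the unseen portion.
renda_tr, renda_te, preco_tr, preco_te = train_test_split(renda, preco, random_state=0)
held_out = LinearRegression().fit(renda_tr, preco_tr)
print(r2_score(preco_te, held_out.predict(renda_te)))
```

Returning to the feature-transformation idea, the next cell plots the $\log(1/x)$ shape described above.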
###Code
x = np.linspace(3, 100, 100)
y = np.log(1 / x)
plt.plot(x, y);
###Output
_____no_output_____
###Markdown
How can we fit a logarithmic curve to our data? The answer is that we don't need to! We just transform the features into log space and then apply a linear regression.
###Code
plt.scatter(np.log(renda), preco, marker='.');
pearsonr(np.log(renda), preco)[0]
###Output
_____no_output_____
###Markdown
As we can see, the correlation coefficient went from -0.737 to -0.815, indicating that there is now a stronger _linear_ relationship between the variables.
###Code
cores = ['red', 'green', 'black', 'gray', 'yellow']
alfas = [0.1, 0.5, 0.8, 1.0, 1.5]
plt.scatter(np.log(renda), preco, marker='.')
for cor, alfa in zip(cores, alfas):
regressor = Lasso(alpha=alfa).fit(np.log(renda), preco)
y0 = regressor.intercept_
m = regressor.coef_[0]
predito = regressor.predict(np.log(renda))
r2 = r2_score(preco, predito)
mae = mean_absolute_error(preco, predito)
mse = mean_squared_error(preco, predito)
print(f'''
y0: {y0}
m: {m}
R^2: {r2}
Erro absoluto: {mae}
Erro quadrado: {mse}
''')
x = np.linspace(0, np.log(40), 200).reshape(-1, 1)
plt.plot(x, regressor.predict(x), color=cor)
regressor = LinearRegression().fit(np.log(renda), preco)
x = np.linspace(renda.min(), renda.max(), 300).reshape(-1, 1)
y = regressor.predict(np.log(x))
plt.scatter(renda, preco, marker='.')
plt.plot(x, y, c='r');
###Output
_____no_output_____
###Markdown
Polynomial regressionCounterintuitively, polynomial regression is just a special case of linear regression. Linear regression can be, and usually is, performed with more than one predictor variable, allowing more complex models. In that case, the equation of the line in multiple dimensions becomes $y = y_0 + m_0x_0 + m_1x_1 + m_2x_2 + \ldots + m_nx_n$. The $x$ values are the observed values of the variables; the regression algorithm is responsible for finding the value $y_0$ and all the coefficient values $m$.Now suppose we have only one predictor variable, as in the dataset we are working with. We can transform the predictor variable as we did with the $\log$ operation, generating several variables that can be used in a multidimensional linear regression, as above. In the specific case where we transform our predictor variable $x$ using its powers, we end up with the predictor variables $x, x^2, x^3, \ldots, x^n$ for some chosen $n$. The equation of the fitted "line" then becomes $y = y_0 + m_1x + m_2x^2 + \ldots + m_nx^n$, a polynomial of degree $n$.We will start by fitting a polynomial of degree 2 (a parabola) to our data and compare it with a polynomial of degree 14.Trivia: the name linear regression refers to a linear combination of the variables, not to the fact that a line is fitted. That is why polynomial regression is a linear regression. The polynomial regression equation in the first paragraph of this cell can also be written as $y - y_0 = \vec{m} \cdot \vec{x} = \sum m_i x_i$, where the right-hand side is the canonical form of a linear combination. Even when the variables are transformed by various non-linear functions, as we did earlier with $\log$, the combination of the features is always done in this linear form, multiplying the coefficients by the feature values and summing the result. In the polynomial case, $x_i = x^i$.
###Code
x1 = renda
x2 = renda ** 2
x = np.hstack([x1, x2])
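# Aside (not in the original notebook): sklearn can build the same power features
# automatically; a minimal equivalent sketch using PolynomialFeatures:
from sklearn.preprocessing import PolynomialFeatures
x_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(renda)  # columns [x, x^2]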
regressor = LinearRegression().fit(x, preco)
y0 = regressor.intercept_
m = regressor.coef_
print(y0, m)
curva = regressor.predict(x)
print(r2_score(preco, curva), mean_absolute_error(preco, curva), mean_squared_error(preco, curva))
plt.scatter(renda, preco)
plt.plot(renda, curva, c='red');
x_l = [renda]
for potencia in range(2, 15):
x_l.append(renda ** potencia)
x = np.hstack(x_l)
regressor = LinearRegression().fit(x, preco)
print(regressor.coef_)
curva = regressor.predict(x)
print(r2_score(preco, curva), mean_absolute_error(preco, curva), mean_squared_error(preco, curva))
plt.scatter(renda, preco)
plt.plot(renda, curva, c='red');
###Output
[[-5.91977860e-14 -4.25344953e-10 -1.39487489e-11 -1.46280246e-10
-1.44814744e-09 -1.20507287e-08 -7.60419750e-08 -2.74351002e-07
5.66824695e-08 -4.76107739e-09 2.11175591e-10 -5.22835908e-12
6.85482212e-14 -3.71910590e-16]]
0.4958010058030722 4.795732015957237 42.564255304489734
###Markdown
Although the degree-14 polynomial is a more complex model, its error is larger than that of the degree-1 (straight line) and degree-2 polynomials. This happens because the model is _overfitted_ to the represented data and does not generalize the trend of the data. In such cases, we can use the L1 and L2 penalties discussed earlier.(A `ConvergenceWarning` may be shown; it is not a program error, but a warning that the result may be numerically unstable due to the high dimensionality.)
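For comparison, an L2-penalised fit could be obtained in the same way with `Ridge`; this is only a minimal sketch reusing the notebook's variables, not part of the original notebook:

```python
from sklearn.linear_model import Ridge

# Ridge applies an L2 penalty to the polynomial coefficients.
ridge = Ridge(alpha=1.0).fit(x, preco)
print(r2_score(preco, ridge.predict(x)))
```

The cell below applies the L1 (Lasso) version for several values of $\alpha$.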
###Code
cores = ['red', 'green', 'black', 'gray', 'yellow']
alfas = [0.1, 0.5, 0.8, 1.0, 1.5]
plt.scatter(renda, preco)
for cor, alfa in zip(cores, alfas):
regressor = Lasso(alpha=alfa)
regressor.fit(x, preco)
predito = regressor.predict(x)
print(r2_score(preco, predito), mean_squared_error(preco, predito))
graphx = np.linspace(0, np.log(40), 200).reshape(-1, 1)
plt.plot(renda, predito, color=cor)
###Output
/home/chinen/.pyenv/versions/3.6.6/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
|
collection_I_need_to_sort/Reinforcement learning implementation.ipynb | ###Markdown
Initiate field
###Code
dims = (3,4)
field = np.ones(dims)
field[1,1] = np.nan
plt.imshow(field,interpolation='nearest',cmap='PuOr')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Initialize parameters
###Code
# all possible states (= all coordinates except for (1,1))
S = [(x,y) for x in range(dims[0]) for y in range(dims[1]) if not np.isnan(field[(x,y)])]
# all possible actions (= (E,W,S,N)) (arrays 'cause will only be needed for addition)
A = [[0,1],[0,-1],[1,0],[-1,0]]
A = [Ar(x) for x in A]
# probabilities (here all equal)
Psa = Ar([0.25]*4)
# cost of step
g = 0.9
# reward function: everywhere = 0.02, win = 1, loose = -1
R = deepcopy(field)*(-0.02)
R[0,3] = 1
R[1,3] = -1
plt.imshow(R,interpolation='nearest',cmap='PuOr')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Initialize the value function (0 everywhere)
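The value-iteration loop below implements the standard backup $V_{k+1}(s) = R(s) + \gamma \max_a \sum_{s'} P_{sa}(s')\,V_k(s')$, where $\gamma$ is the discount factor `g` and $P_{sa}$ comes from `get_Psa` (this equation is added here as a reference; it is not part of the original notebook).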
###Code
def get_Psa(s,a): #here pretty easy: states not probabilistic
s_prime = bounce_border(s+a)
Psa = np.zeros(dims)
Psa[s_prime]=1
return Psa
def get_reward_for_best_action(s,A,V_est):
future_reward = []
for a in A:
Psa = get_Psa(s,a)
future_reward.append(np.nansum(Psa*V_est))
#which action maximises future reward?
best_action = np.where(future_reward==max(future_reward))[0]
#if multiple lead to same result: random choice
if len(best_action)>1:
best_action = np.random.choice(best_action)
return future_reward[int(best_action)]
V_est = np.zeros([3,4])
for iteration in range(100):
# get new value for each state
V_new = np.zeros([3,4])
for s in S:
max_a = get_reward_for_best_action(s,A,V_est)
V_new[s] = R[s]+g*max_a
V_est = deepcopy(V_new)
V_est[0,3] = 1
V_est[1,3] = -1
V_est
###Output
_____no_output_____
###Markdown
Define a helper that keeps a state inside the grid (clamps coordinates at the borders)
###Code
def bounce_border(s):
s[0] = max(0,s[0])
s[0] = min(2,s[0])
s[1] = max(0,s[1])
s[1] = min(3,s[1])
return tuple(s)
###Output
_____no_output_____
###Markdown
Run value iteration for more iterations and visualize the resulting value function
###Code
V_est = np.zeros([3,4])
for iteration in range(1000):
# get new value for each state
V_new = np.zeros([3,4])
for s in S:
# for each state: what is the optimal action?
max_a = get_reward_for_best_action(s,A,V_est)
V_new[s] = R[s]+g*max_a
V_est = deepcopy(V_new)
V_est[0,3] = 1
V_est[1,3] = -1
V_est
plt.imshow(V_est,interpolation='nearest',cmap='PuOr')
plt.colorbar()
###Output
_____no_output_____ |
workbook_3.ipynb | ###Markdown
Importing Libraries
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as plt
###Output
_____no_output_____
###Markdown
Preprocessing data
###Code
df=pd.read_csv('train.csv')
df.shape
df.head()
df.isnull().sum()
df['cancelled'].value_counts()
figure=df['alloted_orders'].hist(bins=50)
figure=df['delivered_orders'].hist(bins=50)
figure=df['undelivered_orders'].hist(bins=50)
figure=df['lifetime_order_count'].hist(bins=50)
figure=df['first_mile_distance'].hist(bins=50)
figure=df['last_mile_distance'].hist(bins=50)
df.describe()
sns.boxplot(df['first_mile_distance'])
df['first_mile_distance']=np.where(df['first_mile_distance']>4,1.853000,df['first_mile_distance'])
sns.boxplot(df['first_mile_distance'])
sns.boxplot(df['last_mile_distance'])
df['last_mile_distance']=np.where(df['last_mile_distance']>8,4.220000,df['last_mile_distance'])
sns.boxplot(df['last_mile_distance'])
df.skew(axis=0).sort_values(ascending=False)
sns.boxplot(df['alloted_orders'])
df['alloted_orders']=np.where(df['alloted_orders']>310,147.000000,df['alloted_orders'])
sns.boxplot(df['alloted_orders'])
sns.boxplot(df['delivered_orders'])
df['delivered_orders']=np.where(df['delivered_orders']>310,146.000000,df['delivered_orders'])
sns.boxplot(df['delivered_orders'])
df.skew(axis=0).sort_values(ascending=False)
sns.boxplot(df['undelivered_orders'])
df['undelivered_orders']=np.where(df['undelivered_orders']>2,1,df['undelivered_orders'])
sns.boxplot(df['undelivered_orders'])
###Output
/Users/abhaylal/opt/anaconda3/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Dropping the 4 columns with maximum missing values
###Code
df.drop(columns=['reassignment_method','reassignment_reason','reassigned_order','cancelled_time'],axis=1,inplace=True)
df.drop(columns=['order_time','allot_time','accept_time','pickup_time','delivered_time'],axis=1,inplace=True)
df.shape
df['delivered_orders'].isnull().sum()
df['undelivered_orders'].isnull().sum()
###Output
_____no_output_____
###Markdown
Delivered and undelivered orders contain null values that need to be dropped
###Code
df.isnull().sum()
df.describe()
df_new=df.dropna(axis=0) # dropping null values in all the rows
df_new.isnull().sum()
###Output
_____no_output_____
###Markdown
No more missing values in dataframe
###Code
df_new.shape
df_new.head()
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(font_scale=2)
fig, ax = plt.subplots(figsize=(35,20))
sns.heatmap(df_new.corr(),annot=True)
#df_new.drop(columns=['lifetime_order_count','session_time','alloted_orders','delivered_orders'],axis=1,inplace=True)
df_new['cancelled'].value_counts()
figure=df_new['alloted_orders'].hist(bins=50)
figure.set_title('Fare')
figure.set_xlabel('Fare')
figure.set_ylabel('No of passenger')
###Output
_____no_output_____
###Markdown
Outlier removal
###Code
X=df_new.drop(columns=['order_date','cancelled'],axis=1)
y=df_new['cancelled']
X
import seaborn as sns
fig, ax = plt.subplots(figsize=(25,10))
sns.boxplot(X['session_time'])
X[X['session_time']>=650]
sns.boxplot(X['first_mile_distance'])
from scipy import stats
import numpy as np
z = np.abs(stats.zscore(X))
threshold = 3
a=X.iloc[np.where(z > 3)].index.values
print(a.shape)
print(X.shape)
X.drop(a, inplace=True)
print(a.shape)
print(X.shape)
y.drop(a,inplace=True)
y.shape
y.value_counts()
a.shape
X
###Output
_____no_output_____
###Markdown
Undersampling
###Code
from sklearn.datasets import make_classification
from imblearn.under_sampling import NearMiss
#nm = NearMiss()
#X,y=nm.fit_resample(X, y)
from imblearn.under_sampling import RandomUnderSampler
undersample = RandomUnderSampler(sampling_strategy=0.5)
X, y= undersample.fit_resample(X, y)
print(X.shape)
print(y.shape)
print(y.value_counts())
X
y
y.value_counts()
sns.boxplot(X['first_mile_distance'])
X
y.shape
###Output
_____no_output_____
###Markdown
New approach: handling class imbalance (class weights, resampling, ensembles)
###Code
df_new
X=df_new.drop(columns=['cancelled','order_date'])
y=df_new['cancelled']
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
from sklearn.model_selection import KFold
import numpy as np
from sklearn.model_selection import GridSearchCV
log_class=LogisticRegression()
grid={'C':10.0 **np.arange(-2,3),'penalty':['l1','l2']}
cv=KFold(n_splits=5,random_state=None,shuffle=False)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,train_size=0.7)
clf=GridSearchCV(log_class,grid,cv=cv,n_jobs=-1,scoring='f1_macro')
clf.fit(X_train,y_train)
y_pred=clf.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred,zero_division=1))
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average='weighted', labels=np.unique(y_pred))
y_train.value_counts()
class_weight=dict({0:1,1:100})
from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier(class_weight=class_weight)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred))
from imblearn.over_sampling import RandomOverSampler
os=RandomOverSampler(0.75)
X_train_ns,y_train_ns=os.fit_resample(X_train,y_train)
from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier()
classifier.fit(X_train_ns,y_train_ns)
y_pred=classifier.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred))
from imblearn.combine import SMOTETomek
os=SMOTETomek(0.75)
X_train_ns,y_train_ns=os.fit_resample(X_train,y_train)
from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier()
classifier.fit(X_train_ns,y_train_ns)
y_pred=classifier.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred))
from imblearn.ensemble import EasyEnsembleClassifier
easy=EasyEnsembleClassifier()
easy.fit(X_train,y_train)
y_pred=easy.predict(X_test)
print(confusion_matrix(y_test,y_pred))
print(accuracy_score(y_test,y_pred))
print(classification_report(y_test,y_pred))
X_train
from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier()
classifier.fit(X,y)
y_pred=classifier.predict(X)
print(confusion_matrix(y,y_pred))
print(accuracy_score(y,y_pred))
print(classification_report(y,y_pred))
###Output
[[425050 0]
[ 14 4251]]
0.9999673899118363
precision recall f1-score support
0 1.00 1.00 1.00 425050
1 1.00 1.00 1.00 4265
accuracy 1.00 429315
macro avg 1.00 1.00 1.00 429315
weighted avg 1.00 1.00 1.00 429315
###Markdown
Machine Learning
###Code
from sklearn.model_selection import GridSearchCV
X=df_new.drop(columns=['order_date','cancelled'],axis=1)
y=df_new['cancelled']
from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)
k=clf.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(k,y)*100
from sklearn.metrics import roc_auc_score
roc_auc_score(y,k)
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
pipe = Pipeline(steps=[ ('dec_tree',clf)])
n_components = list(range(1,X.shape[1]+1,1))
criterion = ['gini', 'entropy']
max_depth = [2,4,6,8,10,12]
parameters = dict(dec_tree__criterion=criterion,dec_tree__max_depth=max_depth)
clf_GS = GridSearchCV(pipe, parameters)
clf_GS.fit(X, y)
print('Best Criterion:', clf_GS.best_estimator_.get_params()['dec_tree__criterion'])
print('Best max_depth:', clf_GS.best_estimator_.get_params()['dec_tree__max_depth'])
print();
print(clf_GS.best_estimator_.get_params()['dec_tree'])
clf = tree.DecisionTreeClassifier(criterion="gini",max_depth=2)
clf = clf.fit(X, y)
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=1000, n_features=9,
n_informative=2, n_redundant=0,
random_state=0, shuffle=False)
clf = RandomForestClassifier(max_depth=2, random_state=0)
clf.fit(X, y)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
from sklearn.model_selection import GridSearchCV
k_range = list(range(1, 31))
param_grid = dict(n_neighbors=k_range)
# defining parameter range
grid = GridSearchCV(knn, param_grid, cv=10, scoring='roc_auc', return_train_score=False,verbose=1)
# fitting the model for grid search
grid_search=grid.fit(X, y)
print(grid.best_score_)
print(grid.best_params_)
print(grid.best_estimator_)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X,y)
k=knn.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(k,y)*100
from sklearn.metrics import roc_auc_score
roc_auc_score(y,k)
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(knn, X, y)
plt.show()
from xgboost import XGBClassifier
model = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=1, enable_categorical=False,
gamma=0, gpu_id=-1, importance_type=None,
interaction_constraints='', learning_rate=0.300000012,
max_delta_step=0, max_depth=6, min_child_weight=1,
monotone_constraints='()', n_estimators=100, n_jobs=4,
num_parallel_tree=1, predictor='auto', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1,
tree_method='exact', validate_parameters=1, verbosity=None)
model.fit(X, y)
###Output
/Users/abhaylal/opt/anaconda3/lib/python3.8/site-packages/xgboost/sklearn.py:1224: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
warnings.warn(label_encoder_deprecation_msg, UserWarning)
###Markdown
XGBoost
###Code
X=df_new.drop(columns=['order_date','cancelled'],axis=1)
y=df_new['cancelled']
## Hyper Parameter Optimization
params={
"learning_rate" : [0.05, 0.10, 0.15, 0.20, 0.25, 0.30 ] ,
"max_depth" : [ 3, 4, 5, 6, 8, 10, 12, 15],
"min_child_weight" : [ 1, 3, 5, 7 ],
"gamma" : [ 0.0, 0.1, 0.2 , 0.3, 0.4 ],
"colsample_bytree" : [ 0.3, 0.4, 0.5 , 0.7 ]
}
## Hyperparameter optimization using RandomizedSearchCV
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
import xgboost
classifier=xgboost.XGBClassifier()
random_search=RandomizedSearchCV(classifier,param_distributions=params,n_iter=5,scoring='roc_auc',n_jobs=-1,cv=5,verbose=3)
random_search.fit(X,y)
random_search.best_estimator_
classifier=xgboost.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.4,
enable_categorical=False, gamma=0.1, gpu_id=-1,
importance_type=None, interaction_constraints='',
learning_rate=0.05, max_delta_step=0, max_depth=4,
min_child_weight=3, missing=1, monotone_constraints='()',
n_estimators=100, n_jobs=4, num_parallel_tree=1, predictor='auto',
random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
subsample=1, tree_method='exact', validate_parameters=1,
verbosity=None)
X
classifier.fit(X,y)
from sklearn.model_selection import cross_val_score
score=cross_val_score(classifier,X,y)
score
###Output
_____no_output_____
###Markdown
Decision tree classifier
###Code
y.value_counts()/y.shape
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
#X=df_new.drop(columns=['order_date','cancelled'],axis=1)
#y=df_new['cancelled']
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
dtc = DecisionTreeClassifier(min_samples_split=2, random_state=0)
# 2. Fit
dtc.fit(X_over,y_over)
# 3. Predict, there're 4 features in the iris dataset
y_pred_class = dtc.predict(X_over)
# Accuracy
metrics.accuracy_score(y_over, y_pred_class)
from sklearn.model_selection import GridSearchCV
# Define the parameter values that should be searched
sample_split_range = list(range(2,50))
# Create a parameter grid: map the parameter names to the values that should be searched
# Simply a python dictionary
# Key: parameter name
# Value: list of values that should be searched for that parameter
# Single key-value pair for param_grid
param_grid = dict(min_samples_split=sample_split_range)
# instantiate the grid
grid = GridSearchCV(dtc, param_grid, cv=10, scoring='roc_auc')
# fit the grid with data
grid.fit(X_over, y_over)
# examine the best model
# Single best score achieved across all params (min_samples_split)
print(grid.best_score_)
# Dictionary containing the parameters (min_samples_split) used to generate that score
print(grid.best_params_)
# Actual model object fit with those best parameters
# Shows default parameters that we did not specify
print(grid.best_estimator_)
#min_samples_split=49 --->> Area Under Curve: 0.5932188903229508
##min_samples_split=30 --->> Area Under Curve: 0.6236830781786517
## min_samples_split=15 --->>Area Under Curve: 0.688457802288759
## min_samples_split=5 ---->> Area Under Curve: 0.8430708724289109
## min_samples_split=2 ---->> Area Under Curve: 1.0
#weights = {0:1, 1:100}
dtc_new=DecisionTreeClassifier(min_samples_split=2,random_state=0)
dtc_new.fit(X, y)
y_pred=dtc_new.predict(X)
a=y.values.tolist()
from sklearn.metrics import accuracy_score, confusion_matrix,roc_curve, roc_auc_score, precision_score, recall_score, precision_recall_curve
from sklearn.metrics import f1_score
print(f'Accuracy Score: {accuracy_score(y,y_pred)}')
print(f'Confusion Matrix: \n{confusion_matrix(y, y_pred)}')
print(f'Area Under Curve: {roc_auc_score(y, y_pred)}')
print(f'Recall score: {recall_score(y,y_pred)}')
print(f'F1 score: {f1_score(y,y_pred)}')
X_over
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
X=df_new.drop(columns=['order_date','cancelled','order_id'],axis=1)
y=df_new['cancelled']
X
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_over,y_over)
model.score(X_over,y_over)
y_pred = model.predict(X_over)
from sklearn.metrics import accuracy_score, confusion_matrix,roc_curve, roc_auc_score, precision_score, recall_score, precision_recall_curve
from sklearn.metrics import f1_score
print(f'Accuracy Score: {accuracy_score(y_over,y_pred)}')
print(f'Confusion Matrix: \n{confusion_matrix(y_over, y_pred)}')
print(f'Area Under Curve: {roc_auc_score(y_over, y_pred)}')
print(f'Recall score: {recall_score(y_over,y_pred)}')
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
lasso=Lasso()
parameters={'alpha':[1e-15,1e-10,1e-8,1e-3,1e-2,1,5,10,20,30,35,40,45,50,55,100]}
lasso_regressor=GridSearchCV(lasso,parameters,scoring='roc_auc',cv=5)
lasso_regressor.fit(X,y)
print(lasso_regressor.best_params_)
print(lasso_regressor.best_score_)
print(lasso_regressor.best_params_)
print(lasso_regressor.best_score_)
###Output
{'alpha': 1e-15}
0.6360980391400355
###Markdown
Test
###Code
#X=df_new.drop(columns=['order_date','cancelled'],axis=1)
#y=df_new['cancelled']
X
test=pd.read_csv('test.csv')
test
test.drop(columns=['reassignment_method','reassignment_reason','reassigned_order'],axis=1,inplace=True)
test.drop(columns=['order_time','allot_time','accept_time'],axis=1,inplace=True)
X_test=test.drop(columns=['order_date'],axis=1)
X_test
X_test.isnull().sum()
from sklearn.impute import SimpleImputer
imp = SimpleImputer(missing_values=np.nan, strategy='median')
imp = imp.fit(X_test[['alloted_orders','delivered_orders','undelivered_orders','lifetime_order_count','session_time']])
X_test[['alloted_orders','delivered_orders','undelivered_orders','lifetime_order_count','session_time']]=imp.transform(X_test[['alloted_orders','delivered_orders','undelivered_orders','lifetime_order_count','session_time']])
X_test
X_test.isnull().sum()
#X_test.drop(columns=['order_id'],inplace=True)
#X_test.drop(columns=['session_time','lifetime_order_count','alloted_orders','delivered_orders'],axis=1,inplace=True)
y_test=classifier.predict(X_test)
y_test
y_test.shape
final=pd.DataFrame()
final['order_id']=X_test['order_id']
final['cancelled']=classifier.predict(X_test)
final
final.to_csv('submit_1.csv', index=False)
X_test.isnull().sum()
final['cancelled'].value_counts()
final.shape
###Output
_____no_output_____
###Markdown
trial
###Code
sns.boxplot(test['session_time'])
from scipy import stats
import numpy as np
z = np.abs(stats.zscore(X_test))
threshold = 3
a=X_test.iloc[np.where(z > 3)].index.values
a.shape
test
df_new.head()
df_new.skew(axis=0).sort_values(ascending=False)
###Output
/var/folders/8w/s79vcmt129n8trzthj89p7hm0000gn/T/ipykernel_3066/3343374817.py:1: FutureWarning: Dropping of nuisance columns in DataFrame reductions (with 'numeric_only=None') is deprecated; in a future version this will raise TypeError. Select only valid columns before calling the reduction.
df_new.skew(axis=0).sort_values(ascending=False)
|
deep_writing-tpu.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import regex as re
import nltk
from nltk.draw.dispersion import dispersion_plot
from nltk.corpus import stopwords
nltk.download('stopwords')
from nltk.probability import FreqDist
from textblob import TextBlob
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')
from nltk.corpus.reader.plaintext import PlaintextCorpusReader
nltk.download('averaged_perceptron_tagger')
from nltk.util import ngrams
from nltk.sentiment import SentimentIntensityAnalyzer
nltk.download('vader_lexicon')
from sklearn.preprocessing import LabelBinarizer
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import np_utils
import urllib
import os
# Any results you write to the current directory are saved as output.
# from google.colab import drive
# drive.mount('/content/gdrive')
# print("The current directory is: ", os.getcwd())
# import os
# os.chdir("/content/gdrive/My Drive/Galvanize Adm/Marcel Proust")
# print("The current directory is: ", os.getcwd())
txt = urllib.request.urlopen('http://www.textfiles.com/stories/3gables.txt').read().decode('utf8')
txt = txt.replace('\n', ' ')[322:]
txt[0:400]
# Let's do some cleaning
words = txt.split(' ')
words_s = [word.lower() for word in words if word.isalpha()]
len(words_s)
# # creating characters, words and lists
# characters = sorted(list(set(txt)))
# words = txt.split(' ')
# sentences = txt.split('.')
# # let's remove stop words from words
# stop_words = set(stopwords.words('english'))
# words_s = [i for i in words if not i in stop_words]
# words_s = [word.lower() for word in words_s if word.isalpha()]
# fdist = FreqDist(words_s) # checking most frequent words in whole document
# plt.figure(figsize= (12,2))
# plt.bar(pd.DataFrame(fdist.most_common()[0:20])[0], pd.DataFrame(fdist.most_common()[0:20])[1])
# plt.xticks(rotation=90)
# plt.show()
# # lemmatizing
# lemmatizer = WordNetLemmatizer()
# words_s_l = pd.Series(words_s).apply(lambda x: lemmatizer.lemmatize(x))
# words_s_l_p = pd.DataFrame(nltk.pos_tag(words_s_l), columns = ['words', 'pos'])
sentences = ' '.join(words_s)
# Let's look at overall sentiment of the book
scores = SentimentIntensityAnalyzer().polarity_scores(sentences)
print(scores)
# split the text into trigrams (three-word n-grams)
def n_gram(x):
tokens = [token for token in x.split(" ") if token != ""]
output = list(ngrams(tokens, 3))
return output
df = pd.DataFrame(pd.Series(sentences).apply(lambda x: n_gram(x))[0])
# make a dataframe with 2 words as independent and just next third word as target column
df = pd.concat([df[0] + ' ' + df[1], df[2]], axis = 1)
df.head()
df.shape
from sklearn.feature_extraction.text import TfidfVectorizer
tfid = TfidfVectorizer()
X = tfid.fit_transform(df[0]).toarray()
X.shape
encoder = LabelBinarizer()
y = encoder.fit_transform(df[2])
y.shape
# LSTMs accept input in the form of (number_of_sequences, length_of_sequence, number_of_features)
X = np.reshape(X, (X.shape[0], X.shape[1], 1))
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(400, input_shape=(X.shape[1], X.shape[2]), return_sequences = True))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.LSTM(400, return_sequences = True))
model.add(tf.keras.layers.Dropout(0.2))
# model.add(tf.keras.layers.LSTM(500, return_sequences = True))
# model.add(tf.keras.layers.Dropout(0.2))
# model.add(tf.keras.layers.LSTM(500, return_sequences = True))
# model.add(tf.keras.layers.Dropout(0.2))
# model.add(tf.keras.layers.LSTM(2500, return_sequences = True))
# model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.LSTM(200))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(y.shape[1], activation='softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
tpu_model.summary()
tpu_model.fit(X, y, epochs = 40, batch_size = 100)
cpu_model = tpu_model.sync_to_cpu()
y_pred = cpu_model.predict(X[99:150])
(encoder.inverse_transform(y_pred)).tolist()
%%time
# Evaluate the model on valid set
score = cpu_model.evaluate(X, y, verbose = 0)
# Print test accuracy
print('\n', 'Valid accuracy:', score[1])
(encoder.inverse_transform(cpu_model.predict(X[1200:1201])))
###Output
_____no_output_____ |
prac10/e4896_beat_sync_chroma.ipynb | ###Markdown
Calculating Beat-Synchronous Chroma FeaturesDan Ellis [email protected] 2016-04-04
###Code
%pylab inline
from __future__ import print_function
import cPickle as pickle
import os
import time
import IPython
import numpy as np
import scipy
import sklearn.mixture
import librosa
import mir_eval
# Load in a single track from the 32kbps 16 kHz SR mono collection.
DATA_DIR = '/Users/dpwe/Downloads/prac10/data'
file_id = 'beatles/Let_It_Be/06-Let_It_Be'
y, sr = librosa.load(os.path.join(DATA_DIR, 'mp3s-32k', file_id + '.mp3'), sr=None)
print("sr=", sr, "duration=", y.shape[0]/float(sr))
# Beat tracking.
hop_length = 128 # 8 ms at 16 kHz
tempo, beats = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop_length, start_bpm=240)
print("tempo (BPM)=", tempo, "beat.shape=", beats.shape)
beat_times = beats * hop_length / float(sr)
print(beat_times[:5])
# Difference of successive beat times shows varying beat duration.
plot(np.diff(beat_times))
def my_imshow(data, **kwargs):
"""Wrapper for imshow that sets common defaults."""
plt.imshow(data, interpolation='nearest', aspect='auto', origin='bottom', cmap='gray_r', **kwargs)
# CQT-based chromagram and beat-level aggregation.
frame_chroma = librosa.feature.chroma_cqt(y=y, sr=sr, hop_length=hop_length)
print("frame_chroma.shape:", frame_chroma.shape)
beat_chroma = librosa.feature.sync(frame_chroma, beats).transpose()
print("beat_chroma.shape:", beat_chroma.shape)
plt.subplot(211)
my_imshow(frame_chroma[:, :12000])
plt.subplot(212)
my_imshow(beat_chroma[:20].transpose())
# Code to convert the Isophonics label files into beat-references we need.
def read_iso_label_file(filename):
"""Read in an isophonics-format chord label file."""
times = []
labels = []
with open(filename, 'r') as f:
for line in f:
fields = line.strip().split(' ')
start_secs = float(fields[0])
end_secs = float(fields[1])
times.append((start_secs, end_secs))
labels.append(fields[2])
return np.array(times), labels
def calculate_overlap_durations(ranges_a, ranges_b):
"""Calculate the duration of overlaps between all pairs of (start, end) intervals."""
max_starts_matrix = np.maximum.outer(ranges_a[:, 0], ranges_b[:, 0])
min_ends_matrix = np.minimum.outer(ranges_a[:, 1], ranges_b[:, 1])
overlap_durations = np.maximum(0, min_ends_matrix - max_starts_matrix)
return overlap_durations
def sample_label_sequence(sample_ranges, label_ranges, labels):
"""Find the most-overlapping label for a list of (start, end) intervals."""
overlaps = calculate_overlap_durations(sample_ranges, label_ranges)
best_label = np.argmax(overlaps, axis=1)
return [labels[i] for i in best_label]
def chord_name_to_index(labels):
"""Convert chord name strings into model indices (0..25)."""
indices = np.zeros(len(labels), dtype=int)
root_degrees = {'C': 0, 'D': 2, 'E': 4, 'F':5, 'G': 7, 'A':9, 'B': 11}
for label_index, label in enumerate(labels):
if label == 'N' or label == 'X':
# Leave at zero.
continue
root_degree = root_degrees[label[0].upper()]
minor = False
if len(label) > 1:
if label[1] == '#':
root_degree = (root_degree + 1) % 12
if label[1] == 'b':
root_degree = (root_degree - 1) % 12
if ':' in label:
modifier = label[label.index(':') + 1:]
if modifier[:3] == 'min':
minor = True
indices[label_index] = 1 + root_degree + 12 * minor
return indices
beat_ranges = np.hstack([beat_times[:, np.newaxis],
np.hstack([beat_times[1:],
2 * beat_times[-1] - beat_times[-2]])[:, np.newaxis]])
label_ranges, labels = read_iso_label_file(os.path.join(DATA_DIR, 'isolabels', file_id + '.txt'))
print(chord_name_to_index(sample_label_sequence(beat_ranges, label_ranges, labels)[:32]))
def calculate_chroma_and_labels_of_id(file_id):
"""Read the audio, calculate beat-sync chroma, sample the label file."""
y, sr = librosa.load(os.path.join(DATA_DIR, 'mp3s-32k', file_id + '.mp3'), sr=None)
hop_length = 128 # 8 ms at 16 kHz
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop_length, start_bpm=240)
# Append a final beat time one beat beyond the end.
extended_beat_frames = np.hstack([beat_frames, 2*beat_frames[-1] - beat_frames[-2]])
frame_chroma = librosa.feature.chroma_cqt(y=y, sr=sr, hop_length=hop_length)
# Drop the first beat_chroma which is stuff before the first beat,
# and the final beat_chroma which is everything after the last beat time.
beat_chroma = librosa.feature.sync(frame_chroma, extended_beat_frames).transpose()
# Drop first row if the beat_frames start after the beginning.
if beat_frames[0] > 0:
beat_chroma = beat_chroma[1:]
# Keep only as many frames as beat times.
beat_chroma = beat_chroma[:len(beat_frames)]
assert beat_chroma.shape[0] == beat_frames.shape[0]
# MP3s encoded with lame have a 68 ms delay
LAME_DELAY_SECONDS = 0.068
frame_rate = sr / float(hop_length)
extended_beat_times = extended_beat_frames / frame_rate - LAME_DELAY_SECONDS
beat_times = extended_beat_times[:-1]
beat_ranges = np.hstack([extended_beat_times[:-1, np.newaxis],
extended_beat_times[1:, np.newaxis]])
label_time_ranges, labels = read_iso_label_file(os.path.join(
DATA_DIR, 'isolabels', file_id + '.txt'))
beat_labels = sample_label_sequence(beat_ranges, label_time_ranges, labels)
label_indices = chord_name_to_index(beat_labels)
return beat_times, beat_chroma, label_indices
beat_times, beat_chroma, label_indices = calculate_chroma_and_labels_of_id(file_id)
print(beat_chroma.shape, beat_times.shape, label_indices.shape)
plt.subplot(211)
my_imshow(beat_chroma[:50].transpose())
plt.subplot(212)
plt.plot(label_indices[:50], '.')
# Read and write per-track data files in native Python serialized format.
def write_beat_chroma_labels(filename, beat_times, chroma_features, label_indices):
"""Write out the computed beat-synchronous chroma data."""
# Create the enclosing directory if needed.
directory = os.path.dirname(filename)
if directory and not os.path.exists(directory):
os.makedirs(directory)
with open(filename, "wb") as f:
pickle.dump((beat_times, chroma_features, label_indices), f, pickle.HIGHEST_PROTOCOL)
def read_beat_chroma_labels(filename):
"""Read back a precomputed beat-synchronous chroma record."""
with open(filename, "rb") as f:
beat_times, chroma_features, label_indices = pickle.load(f)
return beat_times, chroma_features, label_indices
beatchromlab_filename = os.path.join(DATA_DIR, 'beatchromlabs', file_id + '.pkl')
write_beat_chroma_labels(beatchromlab_filename, beat_times, beat_chroma, label_indices)
tt, cc, ll = read_beat_chroma_labels(beatchromlab_filename)
print(cc.shape, tt.shape, ll.shape)
# Read in the list of training file IDs.
def read_file_list(filename):
"""Read a text file with one item per line."""
items = []
with open(filename, 'r') as f:
for line in f:
items.append(line.strip())
return items
train_list_filename = os.path.join(DATA_DIR, 'trainfilelist.txt')
train_files = read_file_list(train_list_filename)
test_list_filename = os.path.join(DATA_DIR, 'testfilelist.txt')
test_files = read_file_list(test_list_filename)
all_ids = train_files
all_ids.extend(test_files)
print("# ids:", len(all_ids))
for number, file_id in enumerate(all_ids):
print(time.ctime(), "File {:d} of {:d}: {:s}".format(number, len(all_ids), file_id))
beat_times, beat_chroma, label_indices = calculate_chroma_and_labels_of_id(file_id)
beatchromlab_filename = os.path.join(DATA_DIR, 'beatchromlabs', file_id + '.pkl')
write_beat_chroma_labels(beatchromlab_filename, beat_times, beat_chroma, label_indices)
###Output
Tue Apr 5 22:56:00 2016 File 0 of 180: beatles/With_The_Beatles/01-It_Won_t_Be_Long
Tue Apr 5 22:56:19 2016 File 1 of 180: beatles/With_The_Beatles/02-All_I_ve_Got_To_Do
Tue Apr 5 22:56:34 2016 File 2 of 180: beatles/With_The_Beatles/03-All_My_Loving
Tue Apr 5 22:56:43 2016 File 3 of 180: beatles/With_The_Beatles/04-Don_t_Bother_Me
Tue Apr 5 22:57:05 2016 File 4 of 180: beatles/With_The_Beatles/05-Little_Child
Tue Apr 5 22:57:11 2016 File 5 of 180: beatles/With_The_Beatles/06-Till_There_Was_You
Tue Apr 5 22:57:23 2016 File 6 of 180: beatles/With_The_Beatles/07-Please_Mister_Postman
Tue Apr 5 22:57:47 2016 File 7 of 180: beatles/With_The_Beatles/08-Roll_Over_Beethoven
Tue Apr 5 22:57:55 2016 File 8 of 180: beatles/With_The_Beatles/09-Hold_Me_Tight
Tue Apr 5 22:58:01 2016 File 9 of 180: beatles/With_The_Beatles/10-You_Really_Got_A_Hold_On_Me
Tue Apr 5 22:58:09 2016 File 10 of 180: beatles/With_The_Beatles/11-I_Wanna_Be_Your_Man
Tue Apr 5 22:58:17 2016 File 11 of 180: beatles/With_The_Beatles/12-Devil_In_Her_Heart
Tue Apr 5 22:58:26 2016 File 12 of 180: beatles/With_The_Beatles/13-Not_A_Second_Time
Tue Apr 5 22:58:32 2016 File 13 of 180: beatles/With_The_Beatles/14-Money_That_s_What_I_Want_
Tue Apr 5 22:58:39 2016 File 14 of 180: beatles/Rubber_Soul/01-Drive_My_Car
Tue Apr 5 22:58:52 2016 File 15 of 180: beatles/Rubber_Soul/02-Norwegian_Wood_This_Bird_Has_Flown_
Tue Apr 5 22:58:58 2016 File 16 of 180: beatles/Rubber_Soul/03-You_Won_t_See_Me
Tue Apr 5 22:59:15 2016 File 17 of 180: beatles/Rubber_Soul/04-Nowhere_Man
Tue Apr 5 22:59:23 2016 File 18 of 180: beatles/Rubber_Soul/05-Think_For_Yourself
Tue Apr 5 22:59:29 2016 File 19 of 180: beatles/Rubber_Soul/06-The_Word
Tue Apr 5 22:59:36 2016 File 20 of 180: beatles/Rubber_Soul/07-Michelle
Tue Apr 5 22:59:41 2016 File 21 of 180: beatles/Rubber_Soul/08-What_Goes_On
Tue Apr 5 22:59:53 2016 File 22 of 180: beatles/Rubber_Soul/09-Girl
Tue Apr 5 22:59:59 2016 File 23 of 180: beatles/Rubber_Soul/10-I_m_Looking_Through_You
Tue Apr 5 23:00:06 2016 File 24 of 180: beatles/Rubber_Soul/11-In_My_Life
Tue Apr 5 23:00:11 2016 File 25 of 180: beatles/Rubber_Soul/12-Wait
Tue Apr 5 23:00:17 2016 File 26 of 180: beatles/Rubber_Soul/13-If_I_Needed_Someone
Tue Apr 5 23:00:23 2016 File 27 of 180: beatles/Rubber_Soul/14-Run_For_Your_Life
Tue Apr 5 23:00:30 2016 File 28 of 180: beatles/The_White_Album_Disc_1/01-Back_In_The_U_S_S_R_
Tue Apr 5 23:00:36 2016 File 29 of 180: beatles/The_White_Album_Disc_1/02-Dear_Prudence
Tue Apr 5 23:00:45 2016 File 30 of 180: beatles/The_White_Album_Disc_1/03-Glass_Onion
Tue Apr 5 23:00:51 2016 File 31 of 180: beatles/The_White_Album_Disc_1/04-Ob-La-Di_Ob-La-Da
Tue Apr 5 23:00:58 2016 File 32 of 180: beatles/The_White_Album_Disc_1/05-Wild_Honey_Pie
Tue Apr 5 23:01:00 2016 File 33 of 180: beatles/The_White_Album_Disc_1/06-The_Continuing_Story_Of_Bungalow_Bill
Tue Apr 5 23:01:08 2016 File 34 of 180: beatles/The_White_Album_Disc_1/07-While_My_Guitar_Gently_Weeps
Tue Apr 5 23:01:25 2016 File 35 of 180: beatles/The_White_Album_Disc_1/08-Happiness_Is_A_Warm_Gun
Tue Apr 5 23:01:32 2016 File 36 of 180: beatles/The_White_Album_Disc_1/09-Martha_My_Dear
Tue Apr 5 23:01:41 2016 File 37 of 180: beatles/The_White_Album_Disc_1/10-I_m_So_Tired
Tue Apr 5 23:01:46 2016 File 38 of 180: beatles/The_White_Album_Disc_1/11-Blackbird
Tue Apr 5 23:01:54 2016 File 39 of 180: beatles/The_White_Album_Disc_1/12-Piggies
Tue Apr 5 23:02:10 2016 File 40 of 180: beatles/The_White_Album_Disc_1/13-Rocky_Raccoon
Tue Apr 5 23:02:18 2016 File 41 of 180: beatles/The_White_Album_Disc_1/14-Don_t_Pass_Me_By
Tue Apr 5 23:02:36 2016 File 42 of 180: beatles/The_White_Album_Disc_1/15-Why_Don_t_We_Do_It_In_The_Road
Tue Apr 5 23:02:41 2016 File 43 of 180: beatles/The_White_Album_Disc_1/16-I_Will
Tue Apr 5 23:02:48 2016 File 44 of 180: beatles/The_White_Album_Disc_1/17-Julia
Tue Apr 5 23:03:01 2016 File 45 of 180: beatles/The_White_Album_Disc_2/01-Birthday
Tue Apr 5 23:03:10 2016 File 46 of 180: beatles/The_White_Album_Disc_2/02-Yer_Blues
Tue Apr 5 23:04:02 2016 File 47 of 180: beatles/The_White_Album_Disc_2/03-Mother_Nature_s_Son
Tue Apr 5 23:04:09 2016 File 48 of 180: beatles/The_White_Album_Disc_2/04-Everybody_s_Got_Something_To_Hide_Except_Me_And_My_Monkey
Tue Apr 5 23:04:15 2016 File 49 of 180: beatles/The_White_Album_Disc_2/05-Sexy_Sadie
Tue Apr 5 23:04:24 2016 File 50 of 180: beatles/The_White_Album_Disc_2/06-Helter_Skelter
Tue Apr 5 23:04:44 2016 File 51 of 180: beatles/The_White_Album_Disc_2/07-Long_Long_Long
Tue Apr 5 23:04:52 2016 File 52 of 180: beatles/The_White_Album_Disc_2/08-Revolution_1
Tue Apr 5 23:05:03 2016 File 53 of 180: beatles/The_White_Album_Disc_2/09-Honey_Pie
Tue Apr 5 23:05:17 2016 File 54 of 180: beatles/The_White_Album_Disc_2/10-Savoy_Truffle
Tue Apr 5 23:05:24 2016 File 55 of 180: beatles/The_White_Album_Disc_2/11-Cry_Baby_Cry
Tue Apr 5 23:05:31 2016 File 56 of 180: beatles/The_White_Album_Disc_2/12-Revolution_9
Tue Apr 5 23:07:19 2016 File 57 of 180: beatles/The_White_Album_Disc_2/13-Good_Night
Tue Apr 5 23:07:26 2016 File 58 of 180: beatles/A_Hard_Day_s_Night/01-A_Hard_Day_s_Night
Tue Apr 5 23:07:48 2016 File 59 of 180: beatles/A_Hard_Day_s_Night/02-I_Should_Have_Known_Better
Tue Apr 5 23:08:03 2016 File 60 of 180: beatles/A_Hard_Day_s_Night/03-If_I_Fell
Tue Apr 5 23:08:10 2016 File 61 of 180: beatles/A_Hard_Day_s_Night/04-I_m_Happy_Just_To_Dance_With_You
Tue Apr 5 23:08:24 2016 File 62 of 180: beatles/A_Hard_Day_s_Night/05-And_I_Love_Her
Tue Apr 5 23:08:46 2016 File 63 of 180: beatles/A_Hard_Day_s_Night/06-Tell_Me_Why
Tue Apr 5 23:09:03 2016 File 64 of 180: beatles/A_Hard_Day_s_Night/07-Can_t_Buy_Me_Love
Tue Apr 5 23:09:08 2016 File 65 of 180: beatles/A_Hard_Day_s_Night/08-Any_Time_At_All
Tue Apr 5 23:09:26 2016 File 66 of 180: beatles/A_Hard_Day_s_Night/09-I_ll_Cry_Instead
Tue Apr 5 23:09:30 2016 File 67 of 180: beatles/A_Hard_Day_s_Night/10-Things_We_Said_Today
Tue Apr 5 23:09:36 2016 File 68 of 180: beatles/A_Hard_Day_s_Night/11-When_I_Get_Home
Tue Apr 5 23:09:55 2016 File 69 of 180: beatles/A_Hard_Day_s_Night/12-You_Can_t_Do_That
Tue Apr 5 23:10:02 2016 File 70 of 180: beatles/A_Hard_Day_s_Night/13-I_ll_Be_Back
Tue Apr 5 23:10:08 2016 File 71 of 180: beatles/Revolver/01-Taxman
Tue Apr 5 23:10:13 2016 File 72 of 180: beatles/Revolver/02-Eleanor_Rigby
Tue Apr 5 23:10:18 2016 File 73 of 180: beatles/Revolver/03-I_m_Only_Sleeping
Tue Apr 5 23:10:24 2016 File 74 of 180: beatles/Revolver/04-Love_You_To
Tue Apr 5 23:10:31 2016 File 75 of 180: beatles/Revolver/05-Here_There_And_Everywhere
Tue Apr 5 23:10:39 2016 File 76 of 180: beatles/Revolver/06-Yellow_Submarine
Tue Apr 5 23:10:46 2016 File 77 of 180: beatles/Revolver/07-She_Said_She_Said
Tue Apr 5 23:10:53 2016 File 78 of 180: beatles/Revolver/08-Good_Day_Sunshine
Tue Apr 5 23:10:58 2016 File 79 of 180: beatles/Revolver/09-And_Your_Bird_Can_Sing
Tue Apr 5 23:11:08 2016 File 80 of 180: beatles/Revolver/10-For_No_One
Tue Apr 5 23:11:15 2016 File 81 of 180: beatles/Revolver/11-Doctor_Robert
Tue Apr 5 23:11:21 2016 File 82 of 180: beatles/Revolver/12-I_Want_To_Tell_You
Tue Apr 5 23:11:29 2016 File 83 of 180: beatles/Revolver/13-Got_To_Get_You_Into_My_Life
Tue Apr 5 23:11:36 2016 File 84 of 180: beatles/Revolver/14-Tomorrow_Never_Knows
Tue Apr 5 23:11:43 2016 File 85 of 180: beatles/Abbey_Road/01-Come_Together
Tue Apr 5 23:12:04 2016 File 86 of 180: beatles/Abbey_Road/02-Something
Tue Apr 5 23:12:36 2016 File 87 of 180: beatles/Abbey_Road/03-Maxwell_s_Silver_Hammer
Tue Apr 5 23:12:47 2016 File 88 of 180: beatles/Abbey_Road/04-Oh_Darling
Tue Apr 5 23:12:56 2016 File 89 of 180: beatles/Abbey_Road/05-Octopus_s_Garden
Tue Apr 5 23:13:23 2016 File 90 of 180: beatles/Abbey_Road/06-I_Want_You_She_s_So_Heavy_
Tue Apr 5 23:14:31 2016 File 91 of 180: beatles/Abbey_Road/07-Here_Comes_The_Sun
Tue Apr 5 23:14:49 2016 File 92 of 180: beatles/Abbey_Road/08-Because
Tue Apr 5 23:14:55 2016 File 93 of 180: beatles/Abbey_Road/09-You_Never_Give_Me_Your_Money
Tue Apr 5 23:15:46 2016 File 94 of 180: beatles/Abbey_Road/10-Sun_King
Tue Apr 5 23:15:51 2016 File 95 of 180: beatles/Abbey_Road/11-Mean_Mr_Mustard
Tue Apr 5 23:15:53 2016 File 96 of 180: beatles/Abbey_Road/12-Polythene_Pam
Tue Apr 5 23:15:56 2016 File 97 of 180: beatles/Abbey_Road/13-She_Came_In_Through_The_Bathroom_Window
Tue Apr 5 23:16:05 2016 File 98 of 180: beatles/Abbey_Road/14-Golden_Slumbers
Tue Apr 5 23:16:09 2016 File 99 of 180: beatles/Abbey_Road/15-Carry_That_Weight
Tue Apr 5 23:16:12 2016 File 100 of 180: beatles/Abbey_Road/16-The_End
Tue Apr 5 23:16:17 2016 File 101 of 180: beatles/Abbey_Road/17-Her_Majesty
Tue Apr 5 23:16:18 2016 File 102 of 180: beatles/Beatles_For_Sale/01-No_Reply
Tue Apr 5 23:16:36 2016 File 103 of 180: beatles/Beatles_For_Sale/02-I_m_A_Loser
Tue Apr 5 23:16:43 2016 File 104 of 180: beatles/Beatles_For_Sale/03-Baby_s_In_Black
Tue Apr 5 23:16:59 2016 File 105 of 180: beatles/Beatles_For_Sale/04-Rock_And_Roll_Music
Tue Apr 5 23:17:05 2016 File 106 of 180: beatles/Beatles_For_Sale/05-I_ll_Follow_The_Sun
Tue Apr 5 23:17:10 2016 File 107 of 180: beatles/Beatles_For_Sale/06-Mr_Moonlight
Tue Apr 5 23:17:16 2016 File 108 of 180: beatles/Beatles_For_Sale/07-Medley_Kansas_City_Hey_Hey
Tue Apr 5 23:17:23 2016 File 109 of 180: beatles/Beatles_For_Sale/08-Eight_Days_A_Week
Tue Apr 5 23:17:29 2016 File 110 of 180: beatles/Beatles_For_Sale/09-Words_Of_Love
Tue Apr 5 23:17:41 2016 File 111 of 180: beatles/Beatles_For_Sale/10-Honey_Don_t
Tue Apr 5 23:18:10 2016 File 112 of 180: beatles/Beatles_For_Sale/11-Every_Little_Thing
Tue Apr 5 23:18:26 2016 File 113 of 180: beatles/Beatles_For_Sale/12-I_Don_t_Want_To_Spoil_The_Party
Tue Apr 5 23:18:32 2016 File 114 of 180: beatles/Beatles_For_Sale/13-What_You_re_Doing
Tue Apr 5 23:18:38 2016 File 115 of 180: beatles/Beatles_For_Sale/14-Everybody_s_Trying_To_Be_My_Baby
Tue Apr 5 23:18:46 2016 File 116 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/01-Sgt_Pepper_s_Lonely_Hearts_Club_Band
Tue Apr 5 23:18:51 2016 File 117 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/02-With_A_Little_Help_From_My_Friends
Tue Apr 5 23:18:57 2016 File 118 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/03-Lucy_In_The_Sky_With_Diamonds
Tue Apr 5 23:19:20 2016 File 119 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/04-Getting_Better
Tue Apr 5 23:19:28 2016 File 120 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/05-Fixing_A_Hole
Tue Apr 5 23:19:35 2016 File 121 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/06-She_s_Leaving_Home
Tue Apr 5 23:19:44 2016 File 122 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/07-Being_For_The_Benefit_Of_Mr_Kite
Tue Apr 5 23:19:51 2016 File 123 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/08-Within_You_Without_You
Tue Apr 5 23:20:03 2016 File 124 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/09-When_I_m_Sixty-Four
Tue Apr 5 23:20:12 2016 File 125 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/10-Lovely_Rita
Tue Apr 5 23:20:36 2016 File 126 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/11-Good_Morning_Good_Morning
Tue Apr 5 23:20:42 2016 File 127 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/12-Sgt_Pepper_s_Lonely_Hearts_Club_Band_Reprise_
Tue Apr 5 23:20:45 2016 File 128 of 180: beatles/Sgt_Pepper_s_Lonely_Hearts_Club_Band/13-A_Day_In_The_Life
Tue Apr 5 23:21:03 2016 File 129 of 180: beatles/Let_It_Be/01-Two_Of_Us
Tue Apr 5 23:21:27 2016 File 130 of 180: beatles/Let_It_Be/02-Dig_A_Pony
Tue Apr 5 23:21:36 2016 File 131 of 180: beatles/Let_It_Be/03-Across_The_Universe
Tue Apr 5 23:21:47 2016 File 132 of 180: beatles/Let_It_Be/04-I_Me_Mine
Tue Apr 5 23:21:56 2016 File 133 of 180: beatles/Let_It_Be/05-Dig_It
Tue Apr 5 23:21:57 2016 File 134 of 180: beatles/Let_It_Be/06-Let_It_Be
Tue Apr 5 23:22:05 2016 File 135 of 180: beatles/Let_It_Be/07-Maggie_Mae
Tue Apr 5 23:22:07 2016 File 136 of 180: beatles/Let_It_Be/08-I_ve_Got_A_Feeling
Tue Apr 5 23:22:16 2016 File 137 of 180: beatles/Let_It_Be/09-One_After_909
Tue Apr 5 23:22:23 2016 File 138 of 180: beatles/Let_It_Be/10-The_Long_And_Winding_Road
Tue Apr 5 23:22:32 2016 File 139 of 180: beatles/Let_It_Be/11-For_You_Blue
Tue Apr 5 23:22:38 2016 File 140 of 180: beatles/Let_It_Be/12-Get_Back
Tue Apr 5 23:22:47 2016 File 141 of 180: beatles/Please_Please_Me/01-I_Saw_Her_Standing_There
Tue Apr 5 23:22:55 2016 File 142 of 180: beatles/Please_Please_Me/02-Misery
Tue Apr 5 23:23:00 2016 File 143 of 180: beatles/Please_Please_Me/03-Anna_Go_To_Him_
Tue Apr 5 23:23:10 2016 File 144 of 180: beatles/Please_Please_Me/04-Chains
Tue Apr 5 23:23:31 2016 File 145 of 180: beatles/Please_Please_Me/05-Boys
Tue Apr 5 23:23:42 2016 File 146 of 180: beatles/Please_Please_Me/06-Ask_Me_Why
Tue Apr 5 23:23:48 2016 File 147 of 180: beatles/Please_Please_Me/07-Please_Please_Me
Tue Apr 5 23:23:53 2016 File 148 of 180: beatles/Please_Please_Me/08-Love_Me_Do
Tue Apr 5 23:23:58 2016 File 149 of 180: beatles/Please_Please_Me/09-P_S_I_Love_You
Tue Apr 5 23:24:04 2016 File 150 of 180: beatles/Please_Please_Me/10-Baby_It_s_You
Tue Apr 5 23:24:10 2016 File 151 of 180: beatles/Please_Please_Me/11-Do_You_Want_To_Know_A_Secret
Tue Apr 5 23:24:25 2016 File 152 of 180: beatles/Please_Please_Me/12-A_Taste_Of_Honey
Tue Apr 5 23:24:30 2016 File 153 of 180: beatles/Please_Please_Me/13-There_s_A_Place
Tue Apr 5 23:24:34 2016 File 154 of 180: beatles/Please_Please_Me/14-Twist_And_Shout
Tue Apr 5 23:24:48 2016 File 155 of 180: beatles/Help_/01-Help_
Tue Apr 5 23:25:07 2016 File 156 of 180: beatles/Help_/02-The_Night_Before
Tue Apr 5 23:25:31 2016 File 157 of 180: beatles/Help_/03-You_ve_Got_To_Hide_Your_Love_Away
Tue Apr 5 23:25:36 2016 File 158 of 180: beatles/Help_/04-I_Need_You
Tue Apr 5 23:25:41 2016 File 159 of 180: beatles/Help_/05-Another_Girl
Tue Apr 5 23:25:46 2016 File 160 of 180: beatles/Help_/06-You_re_Going_To_Lose_That_Girl
Tue Apr 5 23:25:52 2016 File 161 of 180: beatles/Help_/07-Ticket_To_Ride
Tue Apr 5 23:26:26 2016 File 162 of 180: beatles/Help_/08-Act_Naturally
Tue Apr 5 23:26:49 2016 File 163 of 180: beatles/Help_/09-It_s_Only_Love
Tue Apr 5 23:26:53 2016 File 164 of 180: beatles/Help_/10-You_Like_Me_Too_Much
Tue Apr 5 23:27:03 2016 File 165 of 180: beatles/Help_/11-Tell_Me_What_You_See
Tue Apr 5 23:27:09 2016 File 166 of 180: beatles/Help_/12-I_ve_Just_Seen_A_Face
Tue Apr 5 23:27:14 2016 File 167 of 180: beatles/Help_/13-Yesterday
Tue Apr 5 23:27:21 2016 File 168 of 180: beatles/Help_/14-Dizzy_Miss_Lizzy
Tue Apr 5 23:27:28 2016 File 169 of 180: beatles/Magical_Mystery_Tour/01-Magical_Mystery_Tour
Tue Apr 5 23:27:36 2016 File 170 of 180: beatles/Magical_Mystery_Tour/02-The_Fool_On_The_Hill
Tue Apr 5 23:28:06 2016 File 171 of 180: beatles/Magical_Mystery_Tour/03-Flying
Tue Apr 5 23:28:13 2016 File 172 of 180: beatles/Magical_Mystery_Tour/04-Blue_Jay_Way
Tue Apr 5 23:28:22 2016 File 173 of 180: beatles/Magical_Mystery_Tour/05-Your_Mother_Should_Know
Tue Apr 5 23:28:43 2016 File 174 of 180: beatles/Magical_Mystery_Tour/06-I_Am_The_Walrus
Tue Apr 5 23:28:53 2016 File 175 of 180: beatles/Magical_Mystery_Tour/07-Hello_Goodbye
Tue Apr 5 23:29:16 2016 File 176 of 180: beatles/Magical_Mystery_Tour/08-Strawberry_Fields_Forever
Tue Apr 5 23:29:27 2016 File 177 of 180: beatles/Magical_Mystery_Tour/09-Penny_Lane
Tue Apr 5 23:29:41 2016 File 178 of 180: beatles/Magical_Mystery_Tour/10-Baby_You_re_A_Rich_Man
Tue Apr 5 23:29:53 2016 File 179 of 180: beatles/Magical_Mystery_Tour/11-All_You_Need_Is_Love
|
Sentiment Analysis (Movie Review)/Amazon Product Review Using NLP.ipynb | ###Markdown
Amazon Review
###Code
#import the necessary libraries
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import math
import warnings
###Output
_____no_output_____
###Markdown
You can download the data set [here](https://github.com/amankharwal/Amazon-Sentiment-Analysis/blob/master/amazon.rar).
###Code
warnings.filterwarnings('ignore') #hides the warning
warnings.filterwarnings("ignore", category= DeprecationWarning)
warnings.filterwarnings("ignore", category= UserWarning)
sns.set_style("whitegrid") #the plotting style
np.random.seed(135)
df = pd.read_csv('amazon.csv')
df.head()
###Output
_____no_output_____
###Markdown
Describing the Dataset
###Code
data = df.copy()
data.describe()
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 34660 entries, 0 to 34659
Data columns (total 21 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 34660 non-null object
1 name 27900 non-null object
2 asins 34658 non-null object
3 brand 34660 non-null object
4 categories 34660 non-null object
5 keys 34660 non-null object
6 manufacturer 34660 non-null object
7 reviews.date 34621 non-null object
8 reviews.dateAdded 24039 non-null object
9 reviews.dateSeen 34660 non-null object
10 reviews.didPurchase 1 non-null object
11 reviews.doRecommend 34066 non-null object
12 reviews.id 1 non-null float64
13 reviews.numHelpful 34131 non-null float64
14 reviews.rating 34627 non-null float64
15 reviews.sourceURLs 34660 non-null object
16 reviews.text 34659 non-null object
17 reviews.title 34655 non-null object
18 reviews.userCity 0 non-null float64
19 reviews.userProvince 0 non-null float64
20 reviews.username 34658 non-null object
dtypes: float64(5), object(16)
memory usage: 5.6+ MB
###Markdown
We have to clean up the `name` column by identifying each unique product (via its ASIN), because around 7,000 name values are missing; a possible fix is sketched after the next cell.
###Code
data["asins"].unique()
asins_unique = len(data["asins"].unique())
print("Number of Unique ASINs: "+ str(asins_unique))
###Output
Number of Unique ASINs: 42
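With only 42 unique ASINs, one way to patch the missing names (a sketch using the `data` frame defined above; the most-frequent-name strategy is an assumption made here for illustration) is to borrow the most common non-null name seen for the same ASIN:

```python
# Sketch: fill missing product names with the most common name observed for the same ASIN.
most_common_name = (
    data.dropna(subset=["name"])
        .groupby("asins")["name"]
        .agg(lambda names: names.value_counts().index[0])
)
data["name"] = data["name"].fillna(data["asins"].map(most_common_name))
print("Names still missing:", data["name"].isnull().sum())
```

Rows whose ASIN never appears with a name stay null and can be inspected separately.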
###Markdown
Visualizing the distributions of Numerical Variables
###Code
data.hist(bins=50, figsize=(20,15))
plt.show()
###Output
_____no_output_____
###Markdown
Outliers are important: we want to give more weight to the reviews that other customers found helpful (one way to derive such a weight is sketched below). Split the data into Train and Test
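As a minimal sketch of such a weight (the `helpful_weight` column name and the `log1p` damping are illustrative assumptions, not part of the original analysis):

```python
import numpy as np

# Sketch: turn reviews.numHelpful into a damped per-review weight so that a few
# very popular reviews do not dominate any weighted statistic.
data["helpful_weight"] = 1.0 + np.log1p(data["reviews.numHelpful"].fillna(0))
print(data["helpful_weight"].describe())
```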
###Code
## the majority of reviews are 5-star, so we use a stratified split to keep the rating distribution even across train and test
from sklearn.model_selection import StratifiedShuffleSplit
print("Before {}".format(len(data)))
dataAfter = data.dropna(subset=["reviews.rating"])
## drop rows where the rating is null
print("After {}".format(len(dataAfter)))
dataAfter["reviews.rating"] = dataAfter["reviews.rating"].astype(int)
split = StratifiedShuffleSplit(n_splits= 5, test_size = 0.2)
for train_index, test_index in split.split(dataAfter, dataAfter["reviews.rating"]):
strat_train = dataAfter.iloc[train_index]  # split() returns positional indices, so use iloc rather than label-based reindex
strat_test = dataAfter.iloc[test_index]
##checking the data set
print(len(strat_train))
print(len(strat_test))
print(strat_test["reviews.rating"].value_counts()/len(strat_test))
###Output
27701
6926
5.0 0.685244
4.0 0.250650
3.0 0.041294
1.0 0.010973
2.0 0.010829
Name: reviews.rating, dtype: float64
###Markdown
Data Exploration
###Code
reviews = strat_train.copy()
reviews.head()
print(len(reviews["name"].unique()), len(reviews["asins"].unique()))
print(reviews.info())
print(reviews.groupby("asins")["name"].unique())
## the different names for the specific product that have 2 ASINs
different_names = reviews[reviews["asins"] ==
"B00L9EPT8O,B01E6AO69U"]["name"].unique()
for name in different_names:
print(name)
print(reviews[reviews["asins"] == "B00L9EPT8O,B01E6AO69U"]["name"].value_counts())
fig = plt.figure(figsize=(16,10))
ax1 = plt.subplot(211)
ax2 = plt.subplot(212, sharex = ax1)
reviews["asins"].value_counts().plot(kind="bar", ax=ax1, title="ASIN Frequency")
np.log10(reviews["asins"].value_counts()).plot(kind="bar", ax=ax2,
title="ASIN Frequency (Log10 Adjusted)")
plt.show()
###Output
_____no_output_____
###Markdown
Entire training dataset avg rating
###Code
print(reviews["reviews.rating"].mean())
asins_count_ix = reviews["asins"].value_counts().index
plt.subplots(2,1,figsize=(16,12))
plt.subplot(2,1,1)
reviews["asins"].value_counts().plot(kind="bar", title="ASIN Frequency")
plt.subplot(2,1,2)
sns.pointplot(x="asins", y="reviews.rating", order=asins_count_ix, data=reviews)
plt.xticks(rotation=90)
plt.show()
###Output
4.58453477868112
###Markdown
Sentiment Analysis
###Code
# map star ratings to sentiment labels; these will serve as the targets for a sentiment classifier
def sentiments(rating):
if (rating == 5) or (rating== 4):
return "Positive"
elif rating == 3:
return "Neutral"
elif (rating == 2) or (rating == 1):
return "Negative"
## add the sentiments
strat_train["sentiment"] = strat_train["reviews.rating"].apply(sentiments)
strat_test["sentiment"] = strat_test["reviews.rating"].apply(sentiments)
print(strat_train["sentiment"][:20])
###Output
28069 Positive
25144 Positive
12052 Positive
23643 Positive
27878 Positive
30424 Positive
26630 Positive
9888 Positive
29275 Positive
4631 Positive
9733 Positive
21215 Positive
8852 Neutral
18387 Positive
10795 Positive
3938 Positive
5256 Positive
18288 Positive
23776 Positive
26600 Positive
Name: sentiment, dtype: object
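These labels can now serve as targets for an actual text classifier. A minimal sketch, assuming a scikit-learn TF-IDF plus logistic-regression pipeline (an illustrative choice, not necessarily the approach taken in the rest of this analysis):

```python
# Sketch: baseline sentiment classifier trained on the review text with the labels above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

X_train = strat_train["reviews.text"].fillna("")
y_train = strat_train["sentiment"]
X_test = strat_test["reviews.text"].fillna("")
y_test = strat_test["sentiment"]

clf = make_pipeline(TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Because the classes are heavily imbalanced (most reviews are positive), the per-class recall in the report is more informative than raw accuracy.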
|
section_5/04_decision_tree.ipynb | ###Markdown
Decision Trees: A decision tree performs classification using a branching, tree-like structure. Its advantages are that the learned model can be visualized and its rules can be stated explicitly. ● Loading the dataset: This time we use the Iris dataset. The explanatory variables are: * sepal length (cm): length of the sepal * sepal width (cm): width of the sepal * petal length (cm): length of the petal * petal width (cm): width of the petal. The target variable `class` is an integer from 0 to 2 representing the flower species.
###Code
import numpy as np
from sklearn.datasets import load_iris
iris = load_iris()
###Output
_____no_output_____
###Markdown
● Implementing the decision tree: We create a decision tree model with `tree.DecisionTreeClassifier`.
###Code
from sklearn import tree
model = tree.DecisionTreeClassifier(max_depth=3)
###Output
_____no_output_____
###Markdown
Training is performed with the `fit` method, which builds the decision tree.
###Code
model = model.fit(iris.data, iris.target)
###Output
_____no_output_____
###Markdown
We make predictions with the `predict` method and measure the accuracy.
###Code
predicted = model.predict(iris.data)
print("正解率:", sum(predicted == iris.target) / len(iris.target))
###Output
_____no_output_____
###Markdown
We visualize the decision tree using `graphviz` and `pydotplus`.
###Code
import graphviz
import pydotplus
from IPython.display import Image
dot_str = tree.export_graphviz(
model,
feature_names=iris.feature_names,
out_file=None,
filled=True,
rounded=True
)
graph = pydotplus.graph_from_dot_data(dot_str)
file_name = "iris_tree.png"
graph.write_png(file_name)
Image(file_name)
###Output
_____no_output_____ |
3_Tensorflow_programming_model.ipynb | ###Markdown
TensorflowWhen starting off with deep learning, one of the first questions to ask is, which framework to learn?Common choices include TensorFlow, PyTorch, and Keras. All of these choices have their own pros and cons and have their own way of doing things.> From [**The Anatomy of Deep Learning Frameworks**](https://medium.com/@gokul_uf/the-anatomy-of-deep-learning-frameworks-46e2a7af5e47.3ywhrk1st)> The core components of a deep learning framework we must consider are:> + How **Tensor Objects** are defined. At the heart of the framework is the tensor object. A tensor is a generalization of a matrix to n-dimensions. We need a Tensor Object that supports storing the data in form of tensors. Not just that, we would like the object to be able to convert other data types (images, text, video) into tensors and back, supporting indexing, overloading operators, having a space efficient way to store the data and so on.+ How **Operations** on the Tensor Object are defined. A neural network can be considered as a series of Operations performed on an input tensor to give an output. + The use of a **Computation Graph and its Optimizations**. Instead of implementing operations as functions, they are usually implemented as **classes**. This allows us to store more information about the operation like calculated shape of the output (useful for sanity checks), how to compute the gradient or the gradient itself (for the auto-differentiation), have ways to be able to decide whether to compute the op on GPU or CPU and so on. The power of neural networks lies in the ability to chain multiple operations to form a powerful approximator. Therefore, the standard use case is that you can initialize a tensor, perform actions after actions on them and finally interpret the resulting tensor as labels or real values. Unfortunately, as you chain more and more operations together, several issues arise that can drastically slow down your code and introduce bugs as well. There are more such issues and it becomes necessary to be able to get a bigger picture to even notice that these issues exist. We need a way to optimize the resultant chain of operations for both space and time. A Computation Graph which is basically an object that contains links to the instances of various Ops and the relations between which operation takes the output of which operation as well as additional information. + The use of **Auto-differentiation** tools. Another benefit of having the computational graph is that calculating gradients used in the learning phase becomes modular and straightforward to compute. + The use of **BLAS/cuBLAS and cuDNN** extensions for maximizing performance. BLAS or Basic Linear Algebra Subprograms are a collection of optimized matrix operations, initially written in Fortran. These can be leveraged to do very fast matrix (tensor) operations and can provide significant speedups. There are many other software packages like Intel MKL, ATLAS which also perform similar functions. BLAS packages are usually optimized assuming that the instructions will be run on a CPU. In the deep learning situation, this is not the case and BLAS may not be able to fully exploit the parallelism offered by GPUs. To solve this issue, NVIDIA has released cuBLAS which is optimized for GPUs. This is now included with the CUDA toolkit. The computational model for Tensorflow (`tf`) is a **directed graph**.**Nodes** are *functions* (*operations* in `tf` terminology) and **edges** are *tensors*. **Tensor** are multidimensional data arrays. 
$$f(a,b) = (a*b) + (a+b)$$There are several reasons for this design:+ The most important is that is a good way to split up computation into small, **easily differentiable** pieces. `tf` uses automatic differentiation to automatically compute the derivative of every node with respect any other node that can affect the first node's output.+ The graph is also a convenient way for distributing computation across multiple CPUs, GPUs, etc.The primary API of `tf` (written in C++) is accessed through Python. FundamentalsTensorflow approaches series of computations as a flow of data through a graph with nodes being computation units and edges being flow of Tensors (multidimensional arrays).Tensorflow builds the computation graph before it starts execution, so the computations are scheduled only when it is absolutely necessary (lazy programming).TensorFlow comes with a tool, TensorBoard, to visualize the computation graph.`tf` computation graphs are described in code with `tf` API.
###Code
import tensorflow as tf
print(tf.__version__)
###Output
1.12.0
###Markdown
> Python `with` statement (context manager) is useful when you have two related operations which you’d like to execute as a pair, with a block of code in between. The classic example is opening a file, manipulating the file, then closing it:>```pythonwith open('output.txt', 'w') as f: f.write('Hi!')>```> The above `with` statement will automatically close the file after the nested block of code. The advantage of using a `with` statement is that it is guaranteed to close the file no matter how the nested block exits.
###Code
# Basic constant operations: assign fixed values to tensors
a = tf.constant(2)
b = tf.constant(3)
c = a+b
d = a*b
e = c+d
# non interactive session
# the context manager will automatically close the session
with tf.Session() as sess:
print("a= %i" % sess.run(a))
print("b= %i" % sess.run(b))
print("(a+b)+(a*b) = %i" % sess.run(e))
###Output
a= 2
b= 3
(a+b)+(a*b) = 11
###Markdown
`sess.run(node)` executes the part of the computational graph that is needed to compute the value of `node` and only that part. While defining the graph, we are not manipulating any data, only building the nodes and symbols inside our graph.We can use `tf.get_default_graph().get_operations()` to see all the nodes in the graph.
###Code
tf.get_default_graph().get_operations()
###Output
_____no_output_____
###Markdown
You can create initialized tensors in many ways:
###Code
a = tf.zeros([2,3], tf.int32)
b = tf.ones([2,3], tf.int32)
c = tf.fill([3,3], 23.9)
d = tf.range(0,10,1)
with tf.Session() as sess:
print(sess.run(a))
print(sess.run(b))
print(sess.run(c))
print(sess.run(d))
###Output
[[0 0 0]
[0 0 0]]
[[1 1 1]
[1 1 1]]
[[23.9 23.9 23.9]
[23.9 23.9 23.9]
[23.9 23.9 23.9]]
[0 1 2 3 4 5 6 7 8 9]
###Markdown
``tf`` sequences are not iterable! We can also generate tensors filled with random values:
###Code
a = tf.random_normal([2,2], 0.0, 1.0)
b = tf.random_uniform([2,2], 0.0, 1.0)
with tf.Session() as sess:
print(sess.run(a))
print(sess.run(b))
###Output
[[ 0.83202136 -1.335937 ]
[ 1.834337 -0.33183086]]
[[0.3809768 0.12681663]
[0.5305053 0.40903056]]
###Markdown
How do we generate a randomly shuffled sequence of numbers in TensorFlow?
###Code
idx = tf.constant(20)
idx_list = tf.range(idx) # 0~19
shuffle = tf.random_shuffle(idx_list)
# in this case tf returns, in a list, two diferent results
with tf.Session() as sess:
a, b = sess.run([idx_list, shuffle])
print(a)
print(b)
###Output
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19]
[ 5 8 10 15 9 12 18 2 6 11 13 4 3 19 0 14 17 1 7 16]
###Markdown
Essentially, TensorFlow computation graph contains the following parts:+ **Placeholders**, variables used in place of inputs to feed to the graph+ **Variables**, model variables that are going to be optimized to make model perform better+ **Model**, a mathematical function that calculates output based on placeholder and model variablesWhen used as an optimization engine, we will also have:+ **Loss Measure**, guide for optimization of model variables+ **Optimization Method**, update method for tuning model variables
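To make these parts concrete, here is a minimal end-to-end sketch (a toy linear model; the data, learning rate and variable names are made up for illustration and are not part of the original notebook):

```python
import numpy as np
import tensorflow as tf

tf.reset_default_graph()

# Placeholders: inputs fed at run time
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")

# Variables: model parameters to be optimized
w = tf.Variable(0.0, name="w")
b = tf.Variable(0.0, name="b")

# Model: a function of placeholders and variables
y_pred = w * x + b

# Loss measure: guides the optimization of the variables
loss = tf.reduce_mean(tf.square(y_pred - y))

# Optimization method: updates the variables to reduce the loss
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

data_x = np.array([0., 1., 2., 3.])
data_y = 2.0 * data_x + 1.0          # the model should recover w=2, b=1

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={x: data_x, y: data_y})
    print(sess.run([w, b]))          # approximately [2.0, 1.0]
```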
###Code
# Basic operations with variable graph input
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
c = tf.add(a,b)
d = tf.multiply(a,b)
e = tf.add(c,d)
values = {a: 5, b: 3}
# non interactive session
with tf.Session() as sess:
print('a = %i' % sess.run(a, values))
print('b = %i' % sess.run(b, values))
print("(a+b)+(a*b) = %i" % sess.run(e, values))
###Output
a = 5
b = 3
(a+b)+(a*b) = 23
###Markdown
A computational graph is a series of functions chained together, each passing its output to zero, one or more functions further along the chain.In this way we can construct very complex transformations on data by using a library of simple functions.Nodes represent some sort of computation being done in the graph context.Edges are the actual values (tensors) that get passed to and from nodes.+ The values flowing into the graph can come from different sources: from a different graph, from a file, entered by the client, etc. The *input* nodes simply pass on values given to them.+ The other nodes take values, apply an operation and output their result. Values running on edges are tensors:
###Code
# Basic operations with variable as graph input
a = tf.placeholder(tf.int16,shape=[2])
b = tf.placeholder(tf.int16,shape=[2])
c = tf.add(a,b)
d = tf.multiply(a,b)
e = tf.add(c,d)
variables = {a: [2,2], b: [3,3]}
# non interactive session
with tf.Session() as sess:
print(sess.run(a, variables))
print(sess.run(b, variables))
print(sess.run(e, variables))
###Output
[2 2]
[3 3]
[11 11]
###Markdown
ExerciseImplement this computational graph:
###Code
# your code here
###Output
_____no_output_____
###Markdown
There are certain connections between nodes that are not allowed: you cannot create **circular dependencies**.> Dependency: Any node `A` that is required for the computation of a later node `B` is said to be a **dependency** of `B`.The main reason is that circular dependencies create endless feedback loops.There is one exception to this rule: *recurrent neural networks*. In this case `tf` simulates this kind of dependency by copying a **finite** number of versions of the graph, placing them side-by-side, and feeding them one after another in sequence. This process is referred to as **unrolling** the graph.Keeping track of dependencies is a basic feature of `tf`. Let's suppose that we want to compute the output value of the `mul` node. We can see in the unrolled graph that it is not necessary to compute the full graph to get the output of that node. But how do we ensure that we only compute the necessary nodes?It's pretty easy:+ Build a list for each node with all the nodes it directly depends on (not indirectly!). + Initialize an empty stack, which will eventually hold all the nodes we want to compute. + Put in the node you want to get the output from. + Recursively, look at its dependency list and add to the stack the nodes it depends on, until there are no dependencies left to add; in this way we guarantee that we have all the nodes we need.The stack will be ordered in a way that guarantees we can run each node in the stack as we iterate through it. The main thing to look out for is to keep track of nodes that were already computed and to store their values in memory (a toy sketch of this bookkeeping follows below). `tf` workflowsAs we have seen in the previous code, the `tf` workflow is a two-step process:+ Define the computation graph.+ Run the graph with data.This can be done in a *non-interactive* mode or in an *interactive* mode.
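The following toy sketch (plain Python, not `tf` internals) illustrates that bookkeeping: each node lists its direct dependencies, a recursive walk builds the ordered stack, and every node is computed exactly once.

```python
# Toy dependency tracking: node -> direct dependencies, and node -> how to compute it.
deps = {
    "a": [], "b": [],
    "add": ["a", "b"],
    "mul": ["a", "b"],
    "out": ["add", "mul"],
}
ops = {
    "a": lambda vals: 5,
    "b": lambda vals: 3,
    "add": lambda vals: vals["a"] + vals["b"],
    "mul": lambda vals: vals["a"] * vals["b"],
    "out": lambda vals: vals["add"] + vals["mul"],
}

def run(target):
    order, seen = [], set()
    def visit(node):                  # post-order walk: dependencies go on the stack first
        if node in seen:
            return
        seen.add(node)
        for dep in deps[node]:
            visit(dep)
        order.append(node)
    visit(target)
    values = {}
    for node in order:                # every node's dependencies are already in `values`
        values[node] = ops[node](values)
    return values[target]

print(run("mul"))   # 15 -- only a, b and mul are evaluated
print(run("out"))   # 23 -- the full graph is needed here
```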
###Code
# new graph definition
tf.reset_default_graph()
# we can assign a name to every node
a = tf.placeholder(tf.int32, name='input_a')
b = tf.placeholder(tf.int32, name='input_b')
c = tf.add(a,b,name='add_1')
d = tf.multiply(a,b,name='mul_1')
e = tf.add(c,d,name='add_2')
values = {a: 5, b: 3}
# now we can run the graph in an interactive session
sess = tf.Session()
print(sess.run(e, values))
# it is our responsability to close the session
sess.close()
tf.get_default_graph().get_operations()
###Output
23
###Markdown
`tf` has a very useful tool: TensorBoard. Let's see how to use it.
###Code
# cleaning the tf graph space
tf.reset_default_graph()
a = tf.placeholder(tf.int16, name='input_a')
b = tf.placeholder(tf.int16, name='input_b')
c = tf.add(a,b,name='add_1')
d = tf.multiply(a,b,name='mul_1')
e = tf.add(c,d,name='add_2')
values = {a: 5, b: 3}
# now we can run the graph
# graphs are run by invoking Session objects
session = tf.Session()
# when you are passing an operation to 'run' you are
# asking to run all operations necessary to compute that node
# you can save the value of the node in a Python var
output = session.run(e, values)
print(output)
# now let's visualize the graph
# SummaryWriter is an object where we can save information
# about the execution of the computational graph
writer = tf.summary.FileWriter('my_graph', session.graph)
writer.close()
# closing interactive session
session.close()
###Output
23
###Markdown
Open a terminal, go to your working dir, and type in: `tensorboard --logdir="my_graph"`. This starts a `tensorboard` server on port 6006. There, click on the `Graphs` link. You can see that each of the nodes is labeled based on the `name` parameter you passed into each operation. Exercise: Implement and visualize this graph for a constant tensor `[5,3]`. Check these functions in the `tf` official documentation (https://www.tensorflow.org/): `tf.math.reduce_prod`, `tf.math.reduce_sum`. One possible solution is sketched after the empty cell below.
###Code
# your code here
###Output
_____no_output_____
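One possible solution (a sketch; the node names and the `exercise_graph` log directory are arbitrary choices):

```python
import tensorflow as tf

tf.reset_default_graph()
a = tf.constant([5, 3], name="input_a")
b = tf.math.reduce_prod(a, name="prod_b")   # 5 * 3 = 15
c = tf.math.reduce_sum(a, name="sum_c")     # 5 + 3 = 8
d = tf.add(b, c, name="add_d")              # 15 + 8 = 23

with tf.Session() as sess:
    print(sess.run(d))                      # 23
    writer = tf.summary.FileWriter("exercise_graph", sess.graph)
    writer.close()
```

Running `tensorboard --logdir="exercise_graph"` should show the two reduction nodes feeding the final `add_d` node.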
###Markdown
`tf` statements `tf` input data `tf` can take several Python var types that are automatically converted to tensors:`tf.constant([5,3], name='input_a')`But `tf` has a plethora of other data types: `tf.int16`, `tf.quint8`, etc.`tf` is tightly integrated with NumPy. In fact, `tf` data types are based on those from NumPy. Tensors returned from `Session.run` are NumPy arrays, and NumPy arrays are the recommended way of specifying tensors.The `shape` of a tensor describes both the number of dimensions in the tensor as well as the length of each dimension. In addition to being able to specify fixed lengths for each dimension, in some situations you can also assign a flexible length by passing in `None` as a dimension's value (a small example follows below).
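For example, a minimal sketch of a flexible first dimension (the placeholder name is arbitrary):

```python
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
# shape=[None, 2] accepts any number of rows, as long as each row has length 2
a = tf.placeholder(tf.float32, shape=[None, 2], name="flexible_input")
row_sums = tf.reduce_sum(a, axis=1)

with tf.Session() as sess:
    print(sess.run(row_sums, {a: np.ones((2, 2))}))   # 2 rows -> [2. 2.]
    print(sess.run(row_sums, {a: np.ones((5, 2))}))   # 5 rows -> [2. 2. 2. 2. 2.]
```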
###Code
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
a = tf.placeholder(tf.int16, shape=[2,2], name='input_a')
shape = tf.shape(a)
session = tf.Session()
print(session.run(shape))
session.close()
###Output
[2 2]
###Markdown
We can feed data points to placeholder by iterating through the data set:
###Code
tf.reset_default_graph()
list_a_values = [1,2,3]
a = tf.placeholder(tf.int16)
b = a * 2
with tf.Session() as sess:
for a_value in list_a_values:
print(sess.run(b,{a: a_value}))
###Output
2
4
6
###Markdown
`tf` operations`tf` overloads common mathematical operations:
###Code
import tensorflow as tf
tf.reset_default_graph()
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
c = a+b
d = a*b
e = tf.math.add(c,d) #equivalent to c+d
variables = {a: 5, b: 3}
with tf.Session() as sess:
print("(a+b)+(a*b) = %i" % sess.run(e, variables))
###Output
(a+b)+(a*b) = 23
###Markdown
There are more [Tensorflow Operations](https://www.tensorflow.org/api_guides/python/math_ops) `tf` graphsCreating a graph is simple:```pythonimport tensorflow as tfg = tf.Graph()``` Once the graph is initialized we can attach operation to it by using the `Graph.as_default()` method:```pythonwith g.as_default(): a = tf.mul(2,3) ...````tf` automatically creates a graph at the beginning and assigns it to be the default. Thus, if not using `Graph.as_default()` any operation will be automatically placed in the default graph.Creating multiple graphs can be useful if you are defining multiple models that do not have interdependencies:```pythong1 = tf.Graph()g2 = tf.Graph()with g1.as_default(): ... with g2.as_default(): ...``` `tf` Variables`Tensor` and `Operation` objects are **immutable**, but we need a mechanism to save changing values over time (persisting during several `run`). This is accomplished with `Variable` objects, which contain mutable tensor values that persist accross multiple calls to `Session.run()`. Variables can be used anywhere you might use a tensor.`tf` has a number of helper operations to initialize variables: `tf-zeros()`, `tf_ones()`, `tf.random_uniform()`, `tf.random_normal()`, etc.`Variable` objects live in a `Graph` but their state is managed by `Session`. Because of these they need an extra step for inicialization (`tf.global_variables_initializer`):```pythonimport tensorflow as tfa = tf.Variable(3,name="my_var")b = tf.add(5,a)with tf.Session() as sess: sess.run(tf.global_variables_initializer()) ...```In order to chage the value of a `Variable` we can use the `Variable.assign()` method:
###Code
import tensorflow as tf
a = tf.Variable(3,name="my_var")
b = a.assign(tf.multiply(2,a))
# The statement a.assign(...) does not actually assign any value to a,
# but rather creates a tf.Operation that you have to explicitly
# run to update the variable.
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print("a:", a.eval()) # variables are objects, not ops.
print("b:", sess.run(b))
print("b:", sess.run(b))
print("b:", sess.run(b))
print("a:", a.eval())
tf.reset_default_graph()
a = tf.Variable(3,name="my_var")
b = a.assign(tf.multiply(2,a))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(a.eval())
print(b.eval())
###Output
3
6
###Markdown
We can increment and decrement variables:
###Code
import tensorflow as tf
a = tf.Variable(3,name="my_var")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(a.assign_add(1)))
print(sess.run(a.assign_sub(1)))
print(sess.run(a.assign_sub(1)))
sess.run(tf.global_variables_initializer())
print(sess.run(a))
###Output
4
3
2
3
###Markdown
Some classes of `tf` (e.g. `Optimizer`) are able to change variable values automatically without being explicitly asked to do so. Tensorflow sessions maintain values separately; each `Session` can have its own current value for a variable defined in the graph:
###Code
tf.global_variables_initializer()
a = tf.Variable(10)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(tf.global_variables_initializer())
sess2.run(tf.global_variables_initializer())
print(sess1.run(a.assign_add(10)))
print(sess2.run(a.assign_sub(2)))
sess1.close()
sess2.close()
###Output
20
8
###Markdown
`tf` name scopes`tf` offers a tool to help organize your graphs: name scopes.Name scopes allow you to group operations into larger, named blocks. This is very useful for visualizing complex models with `tensorboard`.
###Code
import tensorflow as tf
tf.reset_default_graph()
with tf.name_scope("Scope_A"):
a = tf.add(1, 2, name="A_add")
b = tf.multiply(a, 3, name="A_mul")
with tf.name_scope("Scope_B"):
c = tf.add(4, 5, name="B_add")
d = tf.multiply(c, 6, name="B_mul")
e = tf.add(b, d, name="output")
writer = tf.summary.FileWriter('./name_scope_1', graph=tf.get_default_graph())
writer.close()
with tf.Session() as sess:
print(sess.run(e))
sess.close()
###Output
63
###Markdown
We can start `tensorboard` to see the graph: `tensorboard --logdir="./name_scope_1"`.You can expand the name scope boxes by clicking `+`. Exercise: Let's build and visualize a more complex model:+ Our inputs will be placeholders.+ The model will take in a single vector of any length.+ The graph will be segmented into name scopes.+ We will accumulate the total value of all outputs over time.+ At each run, we are going to save the output of the graph, the accumulated total of all outputs, and the average value of all outputs to disk for use in `tensorboard`. (One possible way to fill in the placeholders is sketched after the cell below.)
###Code
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
# Explicitly create a Graph object
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("variables"):
# your code here
# Primary transformation Operations
with tf.name_scope("transformation"):
# Separate input layer
with tf.name_scope("input"):
# your code here
# Separate middle layer
with tf.name_scope("intermediate_layer"):
# your code here
# Separate output layer
# your code here
with tf.name_scope("update"):
# Increments the total_output Variable by the latest output
# your code here
# Summary Operations
with tf.name_scope("summaries"):
# Calculating average (avg = total/steps)
avg = tf.div(update_total, tf.cast(increment_step, tf.float32), name="average")
# Creates summaries for output node
tf.summary.scalar("output_summary", output)
tf.summary.scalar("total_summary", update_total)
tf.summary.scalar("average_summary", avg)
# Global Variables and Operations
with tf.name_scope("global_ops"):
# Initialization Op
init = tf.global_variables_initializer()
# Merge all summaries
merged_summaries = tf.summary.merge_all()
# Start a Session, using the explicitly created Graph
sess = tf.Session(graph=graph)
# Open a SummaryWriter to save summaries
writer = tf.summary.FileWriter('./improved_graph', graph)
# Initialize Variables
sess.run(init)
###Output
_____no_output_____
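For reference, one possible way to fill in the placeholders above (a sketch, not the only valid solution; the node and variable names are illustrative choices, and the summary/session lines repeat the tail of the cell above so the sketch is self-contained). With these definitions the `run_graph` helper below has everything it needs (`a`, `output`, `increment_step`, `merged_summaries`, `sess`, `writer`):

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    with tf.name_scope("variables"):
        # run counter and running total of all outputs
        global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name="global_step")
        total_output = tf.Variable(0.0, dtype=tf.float32, trainable=False, name="total_output")

    with tf.name_scope("transformation"):
        with tf.name_scope("input"):
            # a 1-D vector of any length
            a = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_a")
        with tf.name_scope("intermediate_layer"):
            b = tf.reduce_prod(a, name="product_b")
            c = tf.reduce_sum(a, name="sum_c")
        with tf.name_scope("output"):
            output = tf.add(b, c, name="output")

    with tf.name_scope("update"):
        # accumulate the latest output and count this run
        update_total = total_output.assign_add(output)
        increment_step = global_step.assign_add(1)

    with tf.name_scope("summaries"):
        avg = tf.div(update_total, tf.cast(increment_step, tf.float32), name="average")
        tf.summary.scalar("output_summary", output)
        tf.summary.scalar("total_summary", update_total)
        tf.summary.scalar("average_summary", avg)

    with tf.name_scope("global_ops"):
        init = tf.global_variables_initializer()
        merged_summaries = tf.summary.merge_all()

sess = tf.Session(graph=graph)
writer = tf.summary.FileWriter('./improved_graph', graph)
sess.run(init)
```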
###Markdown
Let's write a function to run the graph several times:
###Code
def run_graph(input_tensor):
"""
Helper function; runs the graph with given input tensor and saves summaries
"""
feed_dict = {a: input_tensor}
out, step, summary = sess.run([output, increment_step, merged_summaries],
feed_dict=feed_dict)
writer.add_summary(summary, global_step=step)
# Run the graph with various inputs
run_graph([2,8])
run_graph([3,1,3,3])
run_graph([8])
run_graph([1,2,3])
run_graph([11,4])
run_graph([4,1])
run_graph([7,3,1])
run_graph([6,3])
run_graph([0,2])
run_graph([4,5,6])
# Write the summaries to disk
writer.flush()
# Close the SummaryWriter
writer.close()
# Close the session
sess.close()
###Output
_____no_output_____
###Markdown
To start TensorBoard after running this code, run the following command:`tensorboard --logdir='./improved_graph'` ``tf`` eager execution``Eager`` execution is an interface to TensorFlow that provides an imperative programming style. When you enable eager execution, TensorFlow operations execute immediately; you do not execute a pre-constructed graph with ``Session.run()``.First, we must enable ``Eager`` execution. When we do this, operations will execute and return their values immediately. Some things to note:+ We will need to restart the Python kernel since we have already used TensorFlow in graph mode.+ We enable eager at program startup using: tfe.enable_eager_execution().+ Once we enable Eager with ``tfe.enable_eager_execution()``, it cannot be turned off. To get back to graph mode, start a new Python session.
###Code
import tensorflow.contrib.eager as tfe
import tensorflow as tf
tfe.enable_eager_execution()
print(tf.add(1, 2))
###Output
tf.Tensor(3, shape=(), dtype=int32)
###Markdown
TensorFlow Eager execution provides an autograd style API for automatic differentiation.
###Code
def f(x):
# f(x) = x^2 + 3
return tf.multiply(x, x) + 3
print( "f(4) = %.2f" % f(4.) )
# First order derivative
df = tfe.gradients_function(f)
print( "df(4) = %.2f" % df(4.)[0] )
# Second order derivative
d2f = tfe.gradients_function(lambda x: df(x)[0])
print( "d2f(4) = %.2f" % d2f(4.)[0] )
###Output
f(4) = 19.00
df(4) = 8.00
d2f(4) = 2.00
###Markdown
TensorflowWhen starting off with deep learning, one of the first questions to ask is, which framework to learn?Common choices include TensorFlow, PyTorch, and Keras. All of these choices have their own pros and cons and have their own way of doing things.> From [**The Anatomy of Deep Learning Frameworks**](https://medium.com/@gokul_uf/the-anatomy-of-deep-learning-frameworks-46e2a7af5e47.3ywhrk1st)> The core components of a deep learning framework we must consider are:> + How **Tensor Objects** are defined. At the heart of the framework is the tensor object. A tensor is a generalization of a matrix to n-dimensions. We need a Tensor Object that supports storing the data in form of tensors. Not just that, we would like the object to be able to convert other data types (images, text, video) into tensors and back, supporting indexing, overloading operators, having a space efficient way to store the data and so on.+ How **Operations** on the Tensor Object are defined. A neural network can be considered as a series of Operations performed on an input tensor to give an output. + The use of a **Computation Graph and its Optimizations**. Instead of implementing operations as functions, they are usually implemented as **classes**. This allows us to store more information about the operation like calculated shape of the output (useful for sanity checks), how to compute the gradient or the gradient itself (for the auto-differentiation), have ways to be able to decide whether to compute the op on GPU or CPU and so on. The power of neural networks lies in the ability to chain multiple operations to form a powerful approximator. Therefore, the standard use case is that you can initialize a tensor, perform actions after actions on them and finally interpret the resulting tensor as labels or real values. Unfortunately, as you chain more and more operations together, several issues arise that can drastically slow down your code and introduce bugs as well. There are more such issues and it becomes necessary to be able to get a bigger picture to even notice that these issues exist. We need a way to optimize the resultant chain of operations for both space and time. A Computation Graph which is basically an object that contains links to the instances of various Ops and the relations between which operation takes the output of which operation as well as additional information. + The use of **Auto-differentiation** tools. Another benefit of having the computational graph is that calculating gradients used in the learning phase becomes modular and straightforward to compute. + The use of **BLAS/cuBLAS and cuDNN** extensions for maximizing performance. BLAS or Basic Linear Algebra Subprograms are a collection of optimized matrix operations, initially written in Fortran. These can be leveraged to do very fast matrix (tensor) operations and can provide significant speedups. There are many other software packages like Intel MKL, ATLAS which also perform similar functions. BLAS packages are usually optimized assuming that the instructions will be run on a CPU. In the deep learning situation, this is not the case and BLAS may not be able to fully exploit the parallelism offered by GPUs. To solve this issue, NVIDIA has released cuBLAS which is optimized for GPUs. This is now included with the CUDA toolkit. The computational model for Tensorflow (`tf`) is a **directed graph**.**Nodes** are *functions* (*operations* in `tf` terminology) and **edges** are *tensors*. **Tensor** are multidimensional data arrays. 
$$f(a,b) = (a*b) + (a+b)$$There are several reasons for this design:+ The most important is that is a good way to split up computation into small, **easily differentiable** pieces. `tf` uses automatic differentiation to automatically compute the derivative of every node with respect any other node that can affect the first node's output.+ The graph is also a convenient way for distributing computation across multiple CPUs, GPUs, etc.The primary API of `tf` (written in C++) is accessed through Python. FundamentalsTensorflow approaches series of computations as a flow of data through a graph with nodes being computation units and edges being flow of Tensors (multidimensional arrays).Tensorflow builds the computation graph before it starts execution, so the computations are scheduled only when it is absolutely necessary (lazy programming).TensorFlow comes with a tool, TensorBoard, to visualize the computation graph.`tf` computation graphs are described in code with `tf` API.
###Code
import tensorflow as tf
print(tf.__version__)
###Output
1.12.0
###Markdown
> Python `with` statement (context manager) is useful when you have two related operations which you’d like to execute as a pair, with a block of code in between. The classic example is opening a file, manipulating the file, then closing it:>```pythonwith open('output.txt', 'w') as f: f.write('Hi!')>```> The above `with` statement will automatically close the file after the nested block of code. The advantage of using a `with` statement is that it is guaranteed to close the file no matter how the nested block exits.
###Code
# Basic constant operations = to assign a value to a tensor
a = tf.constant(2)
b = tf.constant(3)
c = a+b
d = a*b
e = c+d
# non interactive session
# the context manager will automatically close the session
with tf.Session() as sess:
print("a= %i" % sess.run(a))
print("b= %i" % sess.run(b))
print("(a+b)+(a*b) = %i" % sess.run(e))
###Output
a= 2
b= 3
(a+b)+(a*b) = 11
###Markdown
`sess.run(node)` executes the part of the computational graph that is needed to compute the value of `node` and only that part. While defining the graph, we are not manipulating any data, only building the nodes and symbols inside our graph.We can use `tf.get_default_graph().get_operations()` to see all the nodes in the graph.
###Code
tf.get_default_graph().get_operations()
###Output
_____no_output_____
###Markdown
You can create initialized tensors in many ways:
###Code
a = tf.zeros([2,3], tf.int32)
b = tf.ones([2,3], tf.int32)
c = tf.fill([3,3], 23.9)
d = tf.range(0,10,1)
with tf.Session() as sess:
print(sess.run(a))
print(sess.run(b))
print(sess.run(c))
print(sess.run(d))
###Output
[[0 0 0]
[0 0 0]]
[[1 1 1]
[1 1 1]]
[[23.9 23.9 23.9]
[23.9 23.9 23.9]
[23.9 23.9 23.9]]
[0 1 2 3 4 5 6 7 8 9]
###Markdown
``tf`` sequences are not iterable!We can also generate random variables:
###Code
a = tf.random_normal([2,2], 0.0, 1.0)
b = tf.random_uniform([2,2], 0.0, 1.0)
with tf.Session() as sess:
print(sess.run(a))
print(sess.run(b))
###Output
[[ 0.83202136 -1.335937 ]
[ 1.834337 -0.33183086]]
[[0.3809768 0.12681663]
[0.5305053 0.40903056]]
###Markdown
How to generate random shuffled number in tensorflow?
###Code
idx = tf.constant(20)
idx_list = tf.range(idx) # 0~19
shuffle = tf.random_shuffle(idx_list)
# in this case tf returns, in a list, two diferent results
with tf.Session() as sess:
a, b = sess.run([idx_list, shuffle])
print(a)
print(b)
###Output
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19]
[ 5 8 10 15 9 12 18 2 6 11 13 4 3 19 0 14 17 1 7 16]
###Markdown
Essentially, TensorFlow computation graph contains the following parts:+ **Placeholders**, variables used in place of inputs to feed to the graph+ **Variables**, model variables that are going to be optimized to make model perform better+ **Model**, a mathematical function that calculates output based on placeholder and model variablesWhen used as an optimization engine, we will also have:+ **Loss Measure**, guide for optimization of model variables+ **Optimization Method**, update method for tuning model variables
###Code
# Basic operations with variable graph input
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
c = tf.add(a,b)
d = tf.multiply(a,b)
e = tf.add(c,d)
values = feed_dict={a: 5, b: 3}
# non interactive session
with tf.Session() as sess:
print('a = %i' % sess.run(a, values))
print('b = %i' % sess.run(b, values))
print("(a+b)+(a*b) = %i" % sess.run(e, values))
###Output
a = 5
b = 3
(a+b)+(a*b) = 23
###Markdown
A computational graph is a series of functions chained together, each passing its output to zero, one or more functions further along the chain.In this way we can construct very complex transformations on data by using a library of simple functions.Nodes represent some sort of computation being done in the graph context.Edges are the actual values (tensors) that get passed to and from nodes.+ The values flowing into the graph can come from different sources: from a different graph, from a file, entered by the client, etc. The *input* nodes simply pass on values given to them.+ The other nodes take values, apply an operation and output their result. Values running on edges are tensors:
###Code
# Basic operations with variable as graph input
a = tf.placeholder(tf.int16,shape=[2])
b = tf.placeholder(tf.int16,shape=[2])
c = tf.add(a,b)
d = tf.multiply(a,b)
e = tf.add(c,d)
variables = feed_dict={a: [2,2], b: [3,3]}
# non interactive session
with tf.Session() as sess:
print(sess.run(a, variables))
print(sess.run(b, variables))
print(sess.run(e, variables))
###Output
[2 2]
[3 3]
[11 11]
###Markdown
ExerciseImplement this computational graph:
###Code
# your code here
###Output
_____no_output_____
###Markdown
There are certein connections between nodes that are not allowed: you cannot create **circular dependencies**.> Dependency: Any node `A` that is required for the computation of a later node `B` is said to be a **dependency** of `B`.The main reason is that dependencies create endless feedback loops.There is one exception to this rule: *recurrent neural networks*. In this case `tf` simulate this kind of dependences by copying a **finite** number of versions of the graph, placing them side-by-side, and feeding them into another sequence. This process is referred as **unrolling** the graph.Keeping track of dependencies is a basic feature of `tf`. Let's suppose that we want to compute the output value of the `mul` node. We can see in the unrolled graph that is not necessary to compute the full graph to get the output of that node. But how to ensure that we only compute the necessary nodes?It's pretty easy:+ Build a list for each node with all nodes it directly depends on (not indirectly!). + Initialize an empty stack, wich will eventually hold all the nodes we want to compute. + Put the node you want to get the output from. + Recursively, look at its dependency list and add to the stack the nodes it depends on, until there are no dependencies left to run and in this way we guarantee that we have all the nodes we need.The stack will be ordered in a way that we are guaranteed to be able to run each node in the stack as we iterate through it. The main thing to look out for is to keep track of nodes that were already computed and to store their value in memory. `tf` workflowsAs we have seen in previous code, `tf` workflow is a two-step process:+ Define the computation graph.+ Run the graph with data.This can be done in a *non interactive* mode or in an *interactive* mode.
###Code
# new graph definition
tf.reset_default_graph()
# we can assign a name to every node
a = tf.placeholder(tf.int32, name='input_a')
b = tf.placeholder(tf.int32, name='input_b')
c = tf.add(a,b,name='add_1')
d = tf.multiply(a,b,name='mul_1')
e = tf.add(c,d,name='add_2')
values = feed_dict={a: 5, b: 3}
# now we can run the graph in an interactive session
sess = tf.Session()
print(sess.run(e, values))
# it is our responsability to close the session
sess.close()
tf.get_default_graph().get_operations()
###Output
23
###Markdown
`tf` has a very useful tool: `tensor-board`. Let's see how to use it.
###Code
# cleaning the tf graph space
tf.reset_default_graph()
a = tf.placeholder(tf.int16, name='input_a')
b = tf.placeholder(tf.int16, name='input_b')
c = tf.add(a,b,name='add_1')
d = tf.multiply(a,b,name='mul_1')
e = tf.add(c,d,name='add_2')
values = feed_dict={a: 5, b: 3}
# now we can run the graph
# graphs are run by invoking Session objects
session = tf.Session()
# when you are passing an operation to 'run' you are
# asking to run all operations necessary to compute that node
# you can save the value of the node in a Python var
output = session.run(e, values)
print(output)
# now let's visualize the graph
# SummaryWriter is an object where we can save information
# about the execution of the computational graph
writer = tf.summary.FileWriter('my_graph', session.graph)
writer.close()
# closing interactive session
session.close()
###Output
23
###Markdown
Open a terminal, go to your working dir, and type in:`tensorboard --logdir="my_graph"`This starts a `tensorboard` server on port 6006. There, click on the `Graphs` link. You can see that each of the nodes is labeled based on the `name` parameter you passed into each operation. ExerciseImplement and visualize this graph for a constant tensor `[5,3]`:Check these functions in the `tf` official documentation (https://www.tensorflow.org/): `tf.math.reduce_prod`, `tf.math.reduce_sum`.
###Code
# your code here
###Output
_____no_output_____
###Markdown
`tf` statements `tf` input data `tf` can take several Python var types that are automatically converted to tensors:`tf.constant([5,3], name='input_a')`But `tf` has a plethora of other data types: `tf_int16`, `tf_quint8`, etc.`tf` is tightly integrated with NumPy. In fact, `tf` data types are based on those from NumPy. Tensors returned from `Session.run` are NumPy arrays. NumPy arrays is the recommended way of specifying tensors.The `shape` of tensors describe both the number of dimensions in a tensor as well as the length of each dimension. In addition to to being able to specify fixed lengths to each dimension, in some situations you can also assign a flexible length by passing in `None` as dimension's value.
###Code
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
a = tf.placeholder(tf.int16, shape=[2,2], name='input_a')
shape = tf.shape(a)
session = tf.Session()
print(session.run(shape))
session.close()
###Output
[2 2]
###Markdown
We can feed data points to placeholder by iterating through the data set:
###Code
tf.reset_default_graph()
list_a_values = [1,2,3]
a = tf.placeholder(tf.int16)
b = a * 2
with tf.Session() as sess:
for a_value in list_a_values:
print(sess.run(b,{a: a_value}))
###Output
2
4
6
###Markdown
`tf` operations`tf` overloads common mathematical operations:
###Code
import tensorflow as tf
tf.reset_default_graph()
a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
c = a+b
d = a*b
e = tf.math.add(c,d) #equivalent to c+d
variables = feed_dict={a: 5, b: 3}
with tf.Session() as sess:
print("(a+b)+(a*b) = %i" % sess.run(e, variables))
###Output
(a+b)+(a*b) = 23
###Markdown
There are more [Tensorflow Operations](https://www.tensorflow.org/api_guides/python/math_ops) `tf` graphsCreating a graph is simple:```pythonimport tensorflow as tfg = tf.Graph()``` Once the graph is initialized we can attach operation to it by using the `Graph.as_default()` method:```pythonwith g.as_default(): a = tf.mul(2,3) ...````tf` automatically creates a graph at the beginning and assigns it to be the default. Thus, if not using `Graph.as_default()` any operation will be automatically placed in the default graph.Creating multiple graphs can be useful if you are defining multiple models that do not have interdependencies:```pythong1 = tf.Graph()g2 = tf.Graph()with g1.as_default(): ... with g2.as_default(): ...``` `tf` Variables`Tensor` and `Operation` objects are **immutable**, but we need a mechanism to save changing values over time (persisting during several `run`). This is accomplished with `Variable` objects, which contain mutable tensor values that persist accross multiple calls to `Session.run()`. Variables can be used anywhere you might use a tensor.`tf` has a number of helper operations to initialize variables: `tf-zeros()`, `tf_ones()`, `tf.random_uniform()`, `tf.random_normal()`, etc.`Variable` objects live in a `Graph` but their state is managed by `Session`. Because of these they need an extra step for inicialization (`tf.global_variables_initializer`):```pythonimport tensorflow as tfa = tf.Variable(3,name="my_var")b = tf.add(5,a)with tf.Session() as sess: sess.run(tf.global_variables_initializer()) ...```In order to chage the value of a `Variable` we can use the `Variable.assign()` method:
###Code
import tensorflow as tf
a = tf.Variable(3,name="my_var")
b = a.assign(tf.multiply(2,a))
# The statement a.assign(...) does not actually assign any value to a,
# but rather creates a tf.Operation that you have to explicitly
# run to update the variable.
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print("a:", a.eval()) # variables are objects, not ops.
print("b:", sess.run(b))
print("b:", sess.run(b))
print("b:", sess.run(b))
print("a:", a.eval())
tf.reset_default_graph()
a = tf.Variable(3,name="my_var")
b = a.assign(tf.multiply(2,a))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(a.eval())
print(b.eval())
###Output
3
6
###Markdown
We can increment and decrement variables:
###Code
import tensorflow as tf
a = tf.Variable(3,name="my_var")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(a.assign_add(1)))
print(sess.run(a.assign_sub(1)))
print(sess.run(a.assign_sub(1)))
sess.run(tf.global_variables_initializer())
print(sess.run(a))
###Output
4
3
2
3
###Markdown
Some classes of `tf` (e.g. `Optimizer`) are able to automatically change variable values without being explicitly asked to do so. TensorFlow sessions maintain variable values separately: each `Session` can have its own current value for a variable defined in the graph:
###Code
tf.reset_default_graph()
a = tf.Variable(10)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(tf.global_variables_initializer())
sess2.run(tf.global_variables_initializer())
print(sess1.run(a.assign_add(10)))
print(sess2.run(a.assign_sub(2)))
sess1.close()
sess2.close()
###Output
20
8
###Markdown
`tf` name scopes`tf` offers a tool to help organize your graphs: name scopes.Name scopes allow you to group operations into larger, named blocks. This is very useful for visualizing complex models with `tensorboard`.
###Code
import tensorflow as tf
tf.reset_default_graph()
with tf.name_scope("Scope_A"):
a = tf.add(1, 2, name="A_add")
b = tf.multiply(a, 3, name="A_mul")
with tf.name_scope("Scope_B"):
c = tf.add(4, 5, name="B_add")
d = tf.multiply(c, 6, name="B_mul")
e = tf.add(b, d, name="output")
writer = tf.summary.FileWriter('./name_scope_1', graph=tf.get_default_graph())
writer.close()
with tf.Session() as sess:
print(sess.run(e))
sess.close()
###Output
63
###Markdown
We can start `tensorboard` to see the graph: `tensorboard --logdir="./name_scope_1"`.You can expand the name scope boxes by clicking `+`. ExerciseLet's build and visualize a more complex model:+ Our inputs will be placeholders.+ The model will take in a single vector of any length.+ The graph will be segmented into name scopes.+ We will accumulate the total value of all outputs over time.+ At each run, we are going to save the output of the graph, the accumulated total of all outputs, and the average value of all outputs to disk for use in `tensorboard`.
###Code
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
# Explicitly create a Graph object
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("variables"):
# your code here
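        # A possible solution (our own naming, inferred from the summary ops below):
        # a counter for how often the graph has been run, and a running total of outputs
        global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name="global_step")
        total_output = tf.Variable(0.0, dtype=tf.float32, trainable=False, name="total_output")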
# Primary transformation Operations
with tf.name_scope("transformation"):
# Separate input layer
with tf.name_scope("input"):
# your code here
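            # A possible solution: a 1-D placeholder that accepts a vector of any length
            a = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_a")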
# Separate middle layer
with tf.name_scope("intermediate_layer"):
# your code here
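            # A possible solution: collapse the input vector into two scalars
            b = tf.reduce_prod(a, name="product_b")
            c = tf.reduce_sum(a, name="sum_c")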
# Separate output layer
# your code here
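        # A possible solution for the output layer:
        with tf.name_scope("output"):
            output = tf.add(b, c, name="output")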
with tf.name_scope("update"):
# Increments the total_output Variable by the latest output
# your code here
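        # A possible solution: accumulate the latest output and count the run
        update_total = total_output.assign_add(output)
        increment_step = global_step.assign_add(1)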
# Summary Operations
with tf.name_scope("summaries"):
# Calculating average (avg = total/steps)
avg = tf.div(update_total, tf.cast(increment_step, tf.float32), name="average")
# Creates summaries for output node
tf.summary.scalar("output_summary", output)
tf.summary.scalar("total_summary", update_total)
tf.summary.scalar("average_summary", avg)
# Global Variables and Operations
with tf.name_scope("global_ops"):
# Initialization Op
init = tf.global_variables_initializer()
# Merge all summaries[…]
merged_summaries = tf.summary.merge_all()
# Start a Session, using the explicitly created Graph
sess = tf.Session(graph=graph)
# Open a SummaryWriter to save summaries
writer = tf.summary.FileWriter('./improved_graph', graph)
# Initialize Variables
sess.run(init)
###Output
_____no_output_____
###Markdown
Let's write a function to run the graph several times:
###Code
def run_graph(input_tensor):
"""
Helper function; runs the graph with given input tensor and saves summaries
"""
feed_dict = {a: input_tensor}
out, step, summary = sess.run([output, increment_step, merged_summaries],
feed_dict=feed_dict)
writer.add_summary(summary, global_step=step)
# Run the graph with various inputs
run_graph([2,8])
run_graph([3,1,3,3])
run_graph([8])
run_graph([1,2,3])
run_graph([11,4])
run_graph([4,1])
run_graph([7,3,1])
run_graph([6,3])
run_graph([0,2])
run_graph([4,5,6])
# Write the summaries to disk
writer.flush()
# Close the SummaryWriter
writer.close()
# Close the session
sess.close()
###Output
_____no_output_____
###Markdown
To start TensorBoard after running this code, run the following command:`tensorboard --logdir='./improved_graph'` ``tf`` eager execution``Eager`` execution is an interface to TensorFlow that provides an imperative programming style. When you enable eager execution, TensorFlow operations execute immediately; you do not execute a pre-constructed graph with ``Session.run()``.First, we must enable ``Eager`` execution. When we do this, operations will execute and return their values immediately. Some things to note:+ We will need to restart the Python kernel since we have already used TensorFlow in graph mode.+ We enable eager at program startup using: tfe.enable_eager_execution().+ Once we enable Eager with ``tfe.enable_eager_execution()``, it cannot be turned off. To get back to graph mode, start a new Python session.
###Code
import tensorflow.contrib.eager as tfe
import tensorflow as tf
tfe.enable_eager_execution()
print(tf.add(1, 2))
###Output
tf.Tensor(3, shape=(), dtype=int32)
###Markdown
TensorFlow Eager execution provides an autograd style API for automatic differentiation.
###Code
def f(x):
# f(x) = x^2 + 3
return tf.multiply(x, x) + 3
print( "f(4) = %.2f" % f(4.) )
# First order derivative
df = tfe.gradients_function(f)
print( "df(4) = %.2f" % df(4.)[0] )
# Second order derivative
d2f = tfe.gradients_function(lambda x: df(x)[0])
print( "d2f(4) = %.2f" % d2f(4.)[0] )
###Output
f(4) = 19.00
df(4) = 8.00
d2f(4) = 2.00
|
notebooks/visualization/0.0-th-mean.ipynb | ###Markdown
Volume
###Code
# Imports assumed for this cell (not shown in this excerpt); zoom_heatmap is
# assumed to be a project-specific helper defined elsewhere in the repository.
import os
from pathlib import Path

import numpy as np
import pandas as pd
import zarr
import nibabel as nib
import matplotlib.pyplot as plt
from omegaconf import OmegaConf

DATA = Path(os.getenv('DATA'))
CONFIG = Path(os.getenv('CONFIG'))
out_dir = DATA/'nako/processed/volume'
cfg = OmegaConf.load(str(CONFIG/'volume/config.yaml'))
store = zarr.DirectoryStore(str(out_dir/'maps.zarr'))
info = pd.read_csv(cfg.dataset.info).astype({'key': str, 'age': np.float64})
h5_path = cfg.dataset.data
volume_predictions = pd.read_feather(DATA/'nako/processed/volume/predictions.feather').astype({'key': str})
df = info.join(volume_predictions.set_index('key'), on='key', how='inner')
hmap_average_mean = {'aa': None, 'am': None, 'af': None,
'ym': None, 'mm': None, 'om': None,
'yf': None, 'mf': None, 'of': None}
hmap_mean_zoomed = {'aa': None, 'am': None, 'af': None,
'ym': None, 'mm': None, 'om': None,
'yf': None, 'mf': None, 'of': None}
img = {'aa': None, 'am': None, 'af': None,
'ym': None, 'mm': None, 'om': None,
'yf': None, 'mf': None, 'of': None}
with zarr.open(store=store, mode='a') as zf:
for cat in list(img):
img[cat] = zf[f'average/image/{cat}'][:]
hmap_average_mean[cat] = zf[f'average/heatmap_mean/{cat}'][:]
hmap_mean_zoomed[cat] = zoom_heatmap(hmap_average_mean[cat], img[cat].shape, order=3)
# export results to nifti
#for c in list(img):
# nii = nib.Nifti1Image(img[c], affine)
# nib.save(nii, DATA/f'nako/processed/volume/export/img_{c}.nii.gz')
# nii = nib.Nifti1Image(hmap_mean_zoomed[c], affine)
# nib.save(nii, DATA/f'nako/processed/volume/export/hmap_mean_zoomed_{c}.nii.gz')
c = 'aa'
# set relative threshold
th = 0.6*hmap_mean_zoomed[c].max()
print(img[c].shape)
fig = plt.figure(figsize=(4, 3))
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
ax.imshow(np.rot90(1-img[c][:,:,50]), cmap='gray')
hmap = hmap_mean_zoomed[c][:,:,50]
hmap_masked = np.ma.array(hmap, mask=hmap<th)
#ax.imshow(np.rot90(hmap_masked), cmap='coolwarm', alpha=0.4)
ax.contour(np.rot90(hmap), levels=10,cmap='coolwarm')
ax = fig.add_axes([1, 0, 1, 1])
ax.axis('off')
ax.imshow(np.rot90(1-img[c][:,67,:]), cmap='gray')
hmap = hmap_mean_zoomed[c][:,67,:]
hmap_masked = np.ma.array(hmap, mask=hmap<th)
#ax.imshow(np.rot90(hmap_masked), cmap='coolwarm', alpha=0.4)
ax.contour(np.rot90(hmap), levels=10, cmap='coolwarm')
ax = fig.add_axes([2, 0, 1, 1])
ax.axis('off')
ax.imshow(np.rot90(1-img[c][50,:,:]), cmap='gray')
hmap = hmap_mean_zoomed[c][50,:,:]
hmap_masked = np.ma.array(hmap, mask=hmap<th)
#ax.imshow(np.rot90(hmap_masked), cmap='coolwarm', alpha=0.4)
ax.contour(np.rot90(hmap), levels=10, cmap='coolwarm')
plt.savefig('Figure3.pdf', dpi=600, transparent=False, bbox_inches='tight')
###Output
(100, 125, 105)
###Markdown
Patchwise
###Code
DATA = Path(os.getenv('DATA'))
CONFIG = Path(os.getenv('CONFIG'))
out_dir = DATA/'nako/processed/patchwise'
cfg = OmegaConf.load(str(CONFIG/'volume/config.yaml'))
store = zarr.DirectoryStore(str(out_dir/'patches.zarr'))
position = np.load(CONFIG/'patchwise/patch_positions.npy')
info = pd.read_csv(cfg.dataset.info).astype({'key': str, 'age': np.float64})
position_predictions = pd.read_feather(DATA/'nako/processed/patchwise/predictions_pos.feather').astype({'key': str})
df_pos = info.join(position_predictions.set_index('key'), on='key', how='inner')
mni_brain = nib.load('/home/raheppt1/tools/FSL/data/standard/MNI152_T1_1mm_brain.nii.gz')
affine = mni_brain.affine
mni_brain = mni_brain.get_fdata()[(slice(15,170), slice(15,200), slice(0,155))]
mni_brain.shape
hmap_average_mean = {'aa': None, 'am': None, 'af': None,
'ym': None, 'mm': None, 'om': None,
'yf': None, 'mf': None, 'of': None}
hmap_mean_zoomed = {'aa': None, 'am': None, 'af': None,
'ym': None, 'mm': None, 'om': None,
'yf': None, 'mf': None, 'of': None}
img = {'aa': None, 'am': None, 'af': None,
'ym': None, 'mm': None, 'om': None,
'yf': None, 'mf': None, 'of': None}
with zarr.open(store=store, mode='a') as zf:
for cat in list(img):
img[cat] = zf[f'average/image/{cat}'][:]
hmap_average_mean[cat] = zf[f'average/heatmap_mean/{cat}'][:]
hmap_mean_zoomed[cat] = zoom_heatmap(hmap_average_mean[cat], img[cat].shape, order=3)
# create embeddings
select = range(6)
c = 'aa'
img_embedding = np.zeros([6, 155, 185, 155])
for pos in range(6):
img_embedding[pos,
position[pos, 0]-32: position[pos, 0]+32,
position[pos, 1]-32: position[pos, 1]+32,
position[pos, 2]-32: position[pos, 2]+32] = img[c][pos, ...]
#nii = nib.Nifti1Image(img_embedding, affine)
#nib.save(nii, DATA/f'nako/processed/patchwise/export/img_{c}.nii.gz')
hmap_embedding = np.zeros([6, 155, 185, 155])
for pos in range(6):
hmap_embedding[pos,
position[pos, 0]-32: position[pos, 0]+32,
position[pos, 1]-32: position[pos, 1]+32,
position[pos, 2]-32: position[pos, 2]+32] = hmap_mean_zoomed[c][pos, ...]
#nii = nib.Nifti1Image(embedding, affine)
#nib.save(nii, DATA/f'nako/processed/patchwise/export/hmap_mean_zoomed_{c}.nii.gz')
sl = 75
#plt.imshow(1-mni_brain[:,:,sl], cmap='gray')
tmp = hmap_embedding[0,:,:,sl]
hmap_masked = np.ma.array(tmp, mask=tmp<0.1)
plt.imshow(hmap_masked, cmap='coolwarm', alpha=1.0, vmin=0.1, vmax=0.2)
plt.colorbar()
#plt.savefig('Figure4a_foreground.pdf', dpi=600, transparent=False, bbox_inches='tight')
sl = 75
#plt.imshow(1-mni_brain[:,:,sl], cmap='gray')
tmp = hmap_embedding[3,:,:,sl]
hmap_masked = np.ma.array(tmp, mask=tmp<0.1)
plt.imshow(hmap_masked, cmap='coolwarm', alpha=1.0, vmin=0.1, vmax=0.2)
plt.savefig('Figure4b_foreground.pdf', dpi=600, transparent=False, bbox_inches='tight')
sl = 75
#plt.imshow(1-mni_brain[:,:,sl], cmap='gray')
tmp = hmap_embedding[1,:,:,sl]
hmap_masked = np.ma.array(tmp, mask=tmp<0.1)
plt.imshow(hmap_masked, cmap='coolwarm', alpha=1.0, vmin=0.1, vmax=0.2)
plt.savefig('Figure4d_foreground.pdf', dpi=600, transparent=False, bbox_inches='tight')
sl = 85
plt.imshow(1-mni_brain[:,sl,:], cmap='gray')
tmp = hmap_embedding[1,:,sl,:]
hmap_masked = np.ma.array(tmp, mask=tmp<0.08)
plt.imshow(hmap_masked, cmap='coolwarm', alpha=0.6, vmin=0.08, vmax=0.2)
plt.colorbar()
###Output
_____no_output_____ |
0210_DecisionTrees.ipynb | ###Markdown
Decision Trees and Random Forest Decision Trees Motivating Decision TreesDecision trees are extremely intuitive ways of classifying data: you simply ask a series of questions in order to close in on the class label. For example, if you are into value investing, you might think of the following scheme to decide whether you invest into a certain stock or not: (P/B = Price/Book ratio, P/E = Price/Earnings ratio, PEG = Price/Earnings to growth ratio, E = Equity, D = Debt).Sources tell me this scheme is not Warren Buffet's key to success - it is obviously an overly simplified illustration. But it serves well to demonstrate the idea behind decision trees in ML. The binary splits narrow down the options. The big question here though is of course what questions we ought to ask to derive the desired answer and in what sequence. This we will discuss in the next sections - first intuitively and then in more rigorous terms. Simple Decision Trees IllustratedTo draw up a simple decision tree, we follow these two steps (James et al. (2013)): * Divide the predictor space (set of possible values for $X_1, X_2, \ldots, X_p$) into $M$ distinct and non-overlapping regions $R_1, R_2, \ldots, R_M$. * For every observation in $R_m$ we model the response as a constant. This constant is based on a majority vote among the observation in $R_m$. Let us illustrate this on a two dimensional data set with features $X_1$ and $X_2$. The color of the dots indicate the true class label. Without any split, we have but one region. A decision tree now splits the regions iteratively into predictor spaces $R_m$. This is shown below. The first figure on the left has two spaces, $R_1, R_2$, The second already has four ($R_1, R_2, R_3, R_4$) etc.. The background color indicates the label that the model would assign to a new data point in the respective area. Note that (as in the introductory decision tree figure) each branch can have a sperate number of splits. Some nodes are purer (get split more) than others. The `depth=n` argument in the figure title refers to the depth of the tree. Nodes that contain only a single class (color) are not further split. All others will be further split until a stopping criterion is reached, e.g. * all predictor spaces are pure (contain only one class), * all predictor spaces contain a limited number of data points,* we limit the number of splits through the `depth` argument etc. Mathematical Description Maximizing Information GainHow do we construct the regions $R_1, \ldots, R_M$? Or put in other words: How do we decide on the splitting variables (i.e. $X_1, X_2, \ldots, X_p$), split points and what topology (shape) the tree should have? To explain this we focus on the CART (classification and regression tree) approach of decision trees as implemented in Scikit-learn. In order to describe the mathematics let us assume we have a data set with $p$ inputs for each of the $N$ observations: $(x_i, y_i)$ for $i = 1, 2, \ldots, N$, with $x_i = (x_{i1}, x_{i2}, \ldots, x_{ip})$. Most libraries (including Scikit-learn) have implemented binary decision trees, meaning that each node is split into two child nodes. 
Hence for binary splits at node $m$ we take initial region $R_m$ and select $j$ (of feature $X_j$) and threshold $t_m$ such that the resulting two half-planes $$\begin{equation}R_{\text{left}}(R_m; j, t_m) = \{(X, y) \, | \, X_j < t_m \} \qquad \text{and} \qquad R_{\text{right}}(R_m; j, t_m) = \{(X, y) \, | \, X_j \geq t_m \}\end{equation}$$maximize the **information gain ($G$)** for any value of $j$ and $t_m$. Let us define $\theta = (j, t_m)$ to simplify expressions. Given $R_m$ with $N_m$ being the total number of samples in $R_m$ and $n_{\text{left}}$, $n_{\text{right}}$ the number samples in the left ($R_{\text{left}}$) and right ($R_{\text{right}}$) child nodes, respectively, the information gain function $G$ is defined as $$\begin{equation}G(R_m; \theta) = H(R_m) - \left( \frac{n_{\text{left}}}{N_m} H(R_{\text{left}}) + \frac{n_{\text{right}}}{N_m} H(R_{\text{right}})\right)\end{equation}$$Here, $H()$ is simply a measure of impurity which we will get to shortly. With that, the information gain $G$ is simply the difference between the impurity of the parent node and the sum of the child node impurities. The lower the impurity of the child nodes, the larger information gain we get (Raschka (2015)).Our optimization task can therefore be formulated as to find parameter $\theta$ that maximizes the following expression at every node $m$:$$\begin{equation}\theta^* = \arg \max_{\theta} G(R_m; \theta)\end{equation}$$This is done recursively for every node until the stopping criteria is reached (max. depth, max. number of samples in region etc.; see above). Impurity Measures for Classification TreesThere are three common impurity measures (or splitting criteria), of which only the latter two are recommended for growing decision trees: Classification error rate ($H_E$), Gini index ($H_G$), cross-entropy ($H_H$). To discuss them, let us first define the proportion of class $k$ observations in node $m$ (Friedman et al. (2001)):$$\begin{equation}\hat{p}_{m,k} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k)\end{equation}$$Earlier we mentioned that all observations in region $R_j$ are assigned to the same class following a majority vote. In more formal terms this means that observations in node $m$ are classified to class $k$ for which $k(m) = \arg \max_k \hat{p}_{m,k}$. Now, the impurity measures are defined as follows:$$\begin{align}&\text{Classification Error rate: } & H_E(R_m) &= 1 - \max_k (p_{m,k}) \\&\text{Gini Index: } & H_G(R_m) &= \sum_k p_{m,k} (1 - p_{m,k}) \\&\text{Cross-Entropy: } & H_E(R_m) &= - \sum_k p_{m,k} \, \log(p_{m,k})\end{align}$$Note that for binary classification tasks (e.g. $y \in \{0, 1\}$), $\log$ in the cross-entropy is usually the [logarithm to the base 2.](https://stackoverflow.com/questions/1859554/what-is-entropy-and-information-gain) Below figure graphs the three impurity measures with respect to $p$. As mentioned, the classification error rate should not be used as an impurity measure. In practice it is primarily used to prune a tree - a concept we will not discuss here. Cross-entropy is minimal if all samples at a node belong to the same class and maximal if we have a homogeneous distribution among the classes. Therefore, cross-entropy can be understood as a criterion that attempts to maximize the mutual information in the tree. The Gini-index on the other hand works towards minimizing the probability of misclassification. Similar to entropy it is maximal if classes are evenly distributed and minimal if the vast majority of samples belong to the same class. 
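To make these formulas concrete, here is a small, self-contained sketch (our own illustration, not part of the original notebook's code) that computes the three impurity measures for a single node and the information gain of a candidate binary split:
###Code
import numpy as np

def impurities(labels):
    """Return (classification error, Gini index, cross-entropy) for one node."""
    labels = np.asarray(labels)
    p = np.bincount(labels) / labels.size          # class proportions p_mk
    p = p[p > 0]                                   # drop empty classes to avoid log(0)
    error = 1 - p.max()
    gini = np.sum(p * (1 - p))
    entropy = -np.sum(p * np.log2(p))
    return error, gini, entropy

def information_gain(parent, left, right, criterion=1):
    """G = H(parent) - weighted sum of child impurities (criterion: 1=Gini, 2=entropy)."""
    n, n_l, n_r = len(parent), len(left), len(right)
    H = lambda lbls: impurities(lbls)[criterion]
    return H(parent) - (n_l / n * H(left) + n_r / n * H(right))

parent = [0, 0, 0, 0, 1, 1, 1, 1]
print(impurities(parent))                                     # (0.5, 0.5, 1.0)
print(information_gain(parent, [0, 0, 0, 1], [0, 1, 1, 1]))   # imperfect split
print(information_gain(parent, [0, 0, 0, 0], [1, 1, 1, 1]))   # perfect split
###Output
_____no_output_____
###Markdown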
In practice both Gini index as well as cross-entropy usually produce similar results and thus it is not advisable to spend much time on evaluating trees using different impurity criteria (Raschka (2015)). Advantages and Disadvantages of Decision TreesDecision trees have several advantages over the other classification approaches discussed so far:* Intuition is easy to explain* Trees can be displayed graphically and are easily interpreted even by a layperson* Trees handle quantitative as well as qualitative features; there's no need to scale the values* Some people argue that decision trees mirror human thinking very closely, moreso than other approaches.However, simple decision trees have also disadvantages, some of them are significant:* Instability of trees/high variance: Small changes in the data often result in very different series of splits* Decision trees tend to build complex decision boundaries, which often results in overfitting the data* Low level of predictive accuracy compared to other classification approaches discussed in this course. Decision Trees in PythonTo show how decision trees are applied in Python we once again rely on functions implemented in the Scikit-learn package. This time we will use the `Carseats` data set. It is again a data set that corresponds to James et al. (2013)'s book and contains information on child carseat sales at 400 different stores. `Sales` is the response variable with number of sold units in thousands. All other values are used as features. A detailed description of the data set can be found [here](https://cran.r-project.org/web/packages/ISLR/ISLR.pdf).
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv('Data/Carseats.csv')
df.head()
###Output
_____no_output_____
###Markdown
Let us assume you are a financial consultant and work on a market study that discusses appropriate sales channels for child car seats. You know that your client's strategy is to grow the business. Furthermore you learned that to run sales at break even at a sales location the company needs to sell approximately 4'000 units. What feature drives sales and in which stores would you advise your client to offer their product? Here a decision tree helps very much in visualizing the sales driver. Before we implement the model we prepare the data. Notice that decision trees are invariant to scaling meaning we could, but don't have to scale values. This is true for both quantitative as well as categorical variables. What we need to do though, is transforming categorical values `ShelveLoc, Urban` and `US` into numeric values. We will use pandas `map` method to (a) show an alternative to `pd.factorize` introduced in previous chapters and (b) ensure a mapping that does not confuse (`pd.factorize()` maps a column's first entry to value 0, second to 1 etc. This would mean that `pd.factorize()` would label `Yes` in columns `Urban` or `US` as 0. Yet `Yes` is predominantly represented by a 1. Therefore, with the `map` function we preclude any confusion).
###Code
# Create 'BreakEven' column with 1 if Sales >= 4k, else 0
df['BreakEven'] = df.Sales.map(lambda x: 1 if x>=4 else 0)
# Replace category names with numbers
df.ShelveLoc = df.ShelveLoc.map({'Bad':0, 'Medium': 1, 'Good': 2})
df.Urban = df.Urban.map({'No':0, 'Yes':1})
df.US = df.US.map({'No':0, 'Yes':1})
print(df.head(3))
# Assign features & response to X and y, respectively
X = df.drop(['Sales', 'BreakEven'], axis=1)
y = df.BreakEven
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Now we are in a position to run the `DecisionTreeClassifier`.
###Code
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(max_depth=4)
tree.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Creating a decision tree is as simple as that. From the above output we see that per default the function will use the Gini index as criterion. The only argument we set is the maximal depth. Alternatively we could define the maximum tree nodes, the minimum number of samples required to split an internal node or the minimum number of samples required to be at a leaf node. There are more options and it is best to check the [documentation page](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.htmlsklearn.tree.DecisionTreeClassifier) to get a thorough understanding of the available options. As with all other Scikit-learn functions, we can again call the usual performance metrics. The interpretation is as discussed in chapter 8.
###Code
# Load metrics sublibrary
from sklearn import metrics
# Print performance metrics
print('True proportion of sales >= 4k: ', y.sum() / y.shape[0])
print('Train score: ', tree.score(X_train, y_train))
print('Test score: ', tree.score(X_test, y_test))
print(37*'-')
# Confusion matrix
y_pred = tree.predict(X_test)
print('Confusion matrix: \n',
metrics.confusion_matrix(y_test, y_pred))
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted sales >=4k': y_pred,
'True sales >=4k': y_test})
confm.replace(to_replace={0:'No', 1:'Yes'}, inplace=True)
print(confm.groupby(['True sales >=4k','Predicted sales >=4k']).size().unstack('Predicted sales >=4k'))
###Output
True proportion of sales >= 4k: 0.91
Train score: 0.94375
Test score: 0.8875
-------------------------------------
Confusion matrix:
[[ 0 4]
[ 5 71]]
Predicted sales >=4k No Yes
True sales >=4k
No NaN 4.0
Yes 5.0 71.0
###Markdown
Visualizing Decision TreesVisualizing decision trees was a horrendously cumbersome task in the past. However, starting Scikit-learn version 0.21 there is now a function that plots the decision tree very easily. The following Code snippet displays the necessary steps. For further details - as usual - visit the [Scikit-learn website](https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html). Be sure to have the list of class names for parameter `class_names` in the correct order.
###Code
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree
# Plot tree
plt.figure(figsize=(14, 6))
plot_tree(tree, class_names=['loss', 'profit'],
filled=True, rounded=True);
###Output
_____no_output_____
###Markdown
Decision Trees and Cross ValidationWe have mentioned above that one of the major disadvantages of decision trees is the risk of overfitting. Having discussed cross validation in the previous chapter, we ought to ask the question how cross validation can be used to prevent overfitting. The answer to this seemingly trivial question might be more complex than initially thought. If we apply a $k$-fold cross validation and generate $k$ decision tree for every fold in the training set, then these $k$ tree will most certainly look different from fold to fold. Each of the $k$ decision trees will still suffer from overfitting. Furthermore, CV does not tell us anything about which of the $k$ tree we are supposed to select for prediction, and ultimately, this ought to be our goal. If CV produces 10 different trees but doesn't tell us which one of these to choose, we have gained nothing. Choosing the one tree (among the $k$) with (for example) the lowest error rate will not do it as this approach is most probably flawed: such a model is based on even less information than what would be available through the full training set and in general models with more information beat those with less. So CV will not help us on this end. However, CV is still of use. To recapitulate the idea behind CV: Fundamentally, the purpose of cross validation is not to help select a particular instance of the $k$ decision trees but rather to qualify the model, i.e. to provide metrics such as error rate etc. which in turn can be useful in asserting the level of precision one can expect from the model. Therefore, CV comes into play when we are tuning the model to find the optimal hyperparameter. As an example: usually we do not know the optimal tree form. Does it generalize best when it has depth 2, 5, 10? Or is a stopping criterion of no more than 5, 10, 20 observation per region $R_j$ best? Here we can run different parameter in combination with CV and this, thus, will provide an answer on how well the model will generalize to new data - given the set of hyperparameter. Cross ValidationBelow we show two setups to do this: with loops or grid search. The first approach follows what we learned in the previous chapter on cross validation.
###Code
# Max depth
maxDepth = np.array([1, 2, 5, 10])
# Minimum number of samples required to split any internal node
minSamplesNode = np.array([2, 5, 10, 20])
# The minimum number of samples required to be at a leaf/terminal node
minSamplesLeaf = np.array([2, 5, 10, 20])
# Import necessary functions
from sklearn.model_selection import StratifiedKFold, cross_val_score
# Create k-Fold CV object
kFold = StratifiedKFold(n_splits=10)
# Loop through maxDept values, run CV and print results
for i in maxDepth:
tree = DecisionTreeClassifier(max_depth=i, random_state=0)
scrs = cross_val_score(tree, X_train, y_train, cv=kFold)
print('Score (depth ={0: 3.0f}): {1: .3f} +/- {2: .3f}'.format(i, np.mean(scrs), np.std(scrs)))
print(50*'-')
# Loop through minSamplesNode values, run CV and print results
for i in minSamplesNode:
tree = DecisionTreeClassifier(min_samples_split=i, random_state=0)
scrs = cross_val_score(tree, X_train, y_train, cv=kFold)
print('Score (min sample at node ={0: 3.0f}): {1: .3f} +/- {2: .3f}'.format(i, np.mean(scrs), np.std(scrs)))
print(50*'-')
# Loop through minSamplesNode values, run CV and print results
for i in minSamplesLeaf:
tree = DecisionTreeClassifier(min_samples_leaf=i, random_state=0)
scrs = cross_val_score(tree, X_train, y_train, cv=kFold)
print('Score (min sample at leaf ={0: 3.0f}): {1: .3f} +/- {2: .3f}'.format(i, np.mean(scrs), np.std(scrs)))
###Output
Score (depth = 1): 0.891 +/- 0.029
Score (depth = 2): 0.866 +/- 0.044
Score (depth = 5): 0.850 +/- 0.031
Score (depth = 10): 0.856 +/- 0.035
--------------------------------------------------
Score (min sample at node = 2): 0.856 +/- 0.035
Score (min sample at node = 5): 0.847 +/- 0.041
Score (min sample at node = 10): 0.856 +/- 0.047
Score (min sample at node = 20): 0.844 +/- 0.066
--------------------------------------------------
Score (min sample at leaf = 2): 0.841 +/- 0.045
Score (min sample at leaf = 5): 0.869 +/- 0.041
Score (min sample at leaf = 10): 0.866 +/- 0.037
Score (min sample at leaf = 20): 0.863 +/- 0.049
###Markdown
Based on the output we can conclude that we get better scores with fewer nodes. As for the min sample at a node/leaf, the differences are too small to judge. However, what we can also conclude is that this is a fairly cumbersome process. Three separate loops to find the optimal hyperparameter. Furthermore, these three loops just check for one criterion, but what if we were interested in **all possible combinations**? Maybe we get better results if we combine them - we only know if we check. And, as you should be expecting by now, there's a convenient way of doing this: via grid search. Grid SearchThe approach of grid search is fairly simple: it's a brute-force search paradigm where we specify a list of values for different hyperparameters. The algorithm evaluates the model performance for each combination of hyperparameter to obtain the optimal combination of values from this set (Raschka (2015)). As usual we use a code example to show how this works.
###Code
from sklearn.model_selection import GridSearchCV
# Define the hyperparameter values to be tested
param_grid = {'criterion': ['gini', 'entropy'],
'max_depth': maxDepth,
'min_samples_split': minSamplesNode,
'min_samples_leaf': minSamplesLeaf}
# Run brute-force grid search
gs = GridSearchCV(estimator=DecisionTreeClassifier(random_state=0),
param_grid=param_grid,
scoring='accuracy',
cv=kFold, n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
###Output
0.9
{'criterion': 'gini', 'max_depth': 1, 'min_samples_leaf': 20, 'min_samples_split': 2}
###Markdown
Using the preceding code, we train and tune a `DecisionTreeClassifier` on the given parameter grid. For this we define a dictionary called `param_grid` and apply this to the `GridSearchCV`. Using the training data we obtain the score of the best-performing model via the `best_score_` attribute (here based on the accuracy measure) and the corresponding parameter via the `best_params_`. As it turns out, we are unable to increase the accuracy score by combining stopping criterion outside of the `min_samples_leaf` hyperparameter. The best result is indeed when we use a max. depth of 1 and `min_samples_split` criterion of 2. If a combination of the three criterion (or the entropy criterion) were to yield better results, it would mean that the latter two hyperparameter would have values $\geq$ than the minimum value of 1 and 2, respectively. It is important to note that this result should not deceive you to believe that overfitting with decision trees is a myth. Here we seem to come across a rare exception where a prune tree of depth 1 yields the best performance. In general, decision trees have much better training results on deeper grown trees. Hence the risk of overfitting.> Note that grid search might be a convenient and powerful way of tuning hyperparameter but because it is a brute-force approach it is computationally very expensive. Depending on the number of processors you run your script on and the task at hand this might take substantial time. If, for whatever reasons, this is not feasible, the `RandomizedSearchCV` class might be a feasible alternative. This class draws random parameter from sampling distributions with a specified budget. See [the documentation for more details](http://scikit-learn.org/stable/modules/grid_search.htmlrandomized-parameter-optimization). Finally, to estimate the performance of these parameter on the independent test dataset, we can run these three lines:
###Code
# Extract best parameter
clf = gs.best_estimator_
# Fit model given best parameter
clf.fit(X_train, y_train)
# Print out score on Test dataset
print('Test accuracy: {0: .4f}'.format(clf.score(X_test, y_test)))
###Output
Test accuracy: 0.9500
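###Markdown
As a hedged aside on the `RandomizedSearchCV` alternative mentioned above, a minimal sketch (reusing the `param_grid` and `kFold` objects defined earlier; the budget of 20 sampled candidates is our own choice) could look like this:
###Code
from sklearn.model_selection import RandomizedSearchCV

# Sample 20 random hyperparameter combinations instead of the full grid
rs = RandomizedSearchCV(estimator=DecisionTreeClassifier(random_state=0),
                        param_distributions=param_grid,
                        n_iter=20, scoring='accuracy',
                        cv=kFold, random_state=0, n_jobs=-1)
rs.fit(X_train, y_train)
print(rs.best_score_)
print(rs.best_params_)
###Output
_____no_output_____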
###Markdown
Random Forest Turning Weaknesses into StrengthsCV is so valuable because it provides reliable information on how well a model generalizes. Recall that given a set of $n$ independent observations $Z_1, Z_2, \ldots, Z_n$, each with variance $\sigma^2$, the variance of the mean $\bar{Z}$ of the observations is given by $\sigma^2/n$ (see appendix of the script). This shows that if we average a set of (independent) performance metrics (e.g. classification error), we actually reduce the variance of this error. And in doing so, we increase the validity of said metric. If we now extend this idea to not only assessing performance metrics but also predicting outcomes, we enter the field of **ensemble models**. These models produce $k$ independent predictions (e.g. trees) and assign the class label based on a majority vote of the $k$ outcomes. In doing so we lower the variance without compromising on the low bias of decision trees. Therefore it is easy to see that ensemble methods have proven to be extremely valuable, especially in the field of decision trees. One of the more prominent ensemble algorithm with respect to decision trees is called 'Random Forest'.Let us first define the steps of a random forest model (Raschka (2015, p. 90)):1. Draw a random bootstrap sample of size $n$ (randomly choose $n$ samples from the training set with replacement)2. Grow a decision tree from the bootstrap sample. At each node: 1. Randomly select $m$ features without replacement (with $m < p$) 2. Split the node using the feature that provides the best split according to the objective function (e.g. by maximizing the information gain)3. Repeat steps 1. and 2. $B$ times 4. Aggregate the $B$ predictions (of each tree) and assign the class label by majority voteStep two above might seem odd at first: Why would we restrict the model to only choose from a subset $m$ of features (instead of selecting from the complete set $p$)? This is best explained with James et al. (2013, p. 320):> "Suppose that there is one very strong predictor in the data set, along with a number of other moderately strong predictors. Then in the collection of bagged trees, most or all of the trees will use this strong predictor in the top split. Consequently, all of the bagged trees will look quite similar to each other. Hence the predictions from the bagged trees will be highly correlated. Unfortunately, averaging many highly correlated quantities does not lead to as large of a reduction in variance as averaging many uncorrelated quantities. [...] Random forests overcome this problem by forcing each split to consider only a subset of the predictors. Therefore, on average $(p - m)/p$ of the splits will not even consider the strong predictor, and so other predictors will have more of a chance. We can think of this process as *decorrelating* the trees, thereby making the average of the resulting trees less variable and hence more reliable."On a side note: There exists a predecessor algorithm that works similar to random forests except that it sets $m=p$ in step 2.1 per default. Python has it implement as `BaggingClassifier()`. Since random forests improves on the problem of correlated features, it is today clearly the preferred approach. For this reason we will only discuss the random forest implementation. Selecting Random Forest's HyperparameterThough the interpretability of a random forest does not meet the simplicity of a simple decision tree, a big advantage is that we do not have to worry that much about choosing good hyperparameter values. 
We have three primary values to set: * The size $n$ of the bootstrap (step 1)* The subset $m$ of possible features (step 2)* The number of iterations $B$ (step 3)Typically, the larger number of trees $B$, the better the performance of our random forest classifier. But this of course comes at the expense of (potentially significant) increased computational costs. Additionally, the marginal improvement decreases as the number of trees is increased, i.e. at a certain point the cost in computation time will outgrow the benefit in prediction accuracy from more trees. In the Scikit-learn implementation, this hyperparameter is steered through the `n_estimators` argument.The feature subset size ($m$) to consider at each node is typically set to $m = \sqrt{p}$, that is, the number of predictors considered at each split is approximately equal to the square root of the total number of predictors $p$. Scikit-learn uses the `max_feature` argument to control for it. Finally, via the size $n$ of the bootstrap we control the bias-variance trade-off. A large value for $n$ will decrease randomness and thus such a model is more likely to overfit. On the other hand, preventing overfitting by selecting smaller values come at the expense of the model predictive performance. And since predictive accuracy is what we are most interested in, the vast majority of random forest implementations, including the `RandomForestClassifier` implementation in Scikit-learn, have set the bootstrap sample size $n$ per default to the number of samples in the original training set. This provides a good bias-variance trade-off (Raschka (2015)). Random Forest in Scikit-LearnThere is an easily accessible implementation of a random forest classifier in Scikit-learn that we can use. For [a description of the available hyperparameter please check again the function's documentation](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). It is also left to the reader to investigate how CV and/or grid-search can improve the performance. For this the same steps as explained for the `DecisionTreeClasifier` function can be applied.
###Code
from sklearn.ensemble import RandomForestClassifier
# Create classifier object and fit it to data
forest = RandomForestClassifier(criterion='gini', random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
# Print test score
print('Test accuracy: {0: .4f}'.format(forest.score(X_test, y_test)))
###Output
Test accuracy: 0.9500
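###Markdown
To follow up on the hyperparameters discussed above, here is a hedged sketch of tuning `n_estimators` and `max_features` with the same grid-search machinery as before (the candidate values are our own choice, not a recommendation):
###Code
# Reuses GridSearchCV, kFold and the train/test split defined earlier
rf_grid = {'n_estimators': [50, 100, 250, 500],
           'max_features': ['sqrt', 0.5, None]}
gs_rf = GridSearchCV(estimator=RandomForestClassifier(random_state=0, n_jobs=-1),
                     param_grid=rf_grid,
                     scoring='accuracy', cv=kFold, n_jobs=-1)
gs_rf.fit(X_train, y_train)
print(gs_rf.best_score_)
print(gs_rf.best_params_)
print('Test accuracy: {0: .4f}'.format(gs_rf.best_estimator_.score(X_test, y_test)))
###Output
_____no_output_____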
###Markdown
Decision Trees and Random Forest Decision Trees Motivating Decision TreesDecision trees are extremely intuitive ways of classifying data: you simply ask a series of questions in order to close in on the class label. For example, if you are into value investing, you might think of the following scheme to decide whether you invest into a certain stock or not: (P/B = Price/Book ratio, P/E = Price/Earnings ratio, PEG = Price/Earnings to growth ratio, E = Equity, D = Debt).Sources tell me this scheme is not Warren Buffet's key to success - it is obviously an overly simplified illustration. But it serves well to demonstrate the idea behind decision trees in ML. The binary splits narrow down the options. The big question here though is of course what questions we ought to ask to derive the desired answer and in what sequence. This we will discuss in the next sections - first intuitively and then in more rigorous terms. Simple Decision Trees IllustratedTo draw up a simple decision tree, we follow these two steps (James et al. (2013)): * Divide the predictor space (set of possible values for $X_1, X_2, \ldots, X_p$) into $M$ distinct and non-overlapping regions $R_1, R_2, \ldots, R_M$. * For every observation in $R_m$ we model the response as a constant. This constant is based on a majority vote among the observation in $R_m$. Let us illustrate this on a two dimensional data set with features $X_1$ and $X_2$. The color of the dots indicate the class label. Without any split, we have but one region. A decision tree now splits the regions iteratively into predictor spaces $R_m$. This is shown below. The first figure on the left has two spaces, $R_1, R_2$, The second already has four ($R_1, R_2, R_3, R_4$) etc.. The background color indicates the label that the model would assign to a new data point in the respective area. Note that (as in the introductory decision tree figure) each branch can have a sperate number of splits. Some nodes are purer (get split more) than others. The `depth=n` argument in the figure title refers to the depth of the tree. Nodes that contain only a single class (color) are not further split. All others will be further split until a stopping criterion is reached, e.g. * all predictor spaces are pure (contain only one class), * all predictor spaces contain a limited number of points,* we limit the number of splits through the `depth` argument etc. Mathematical Description Maximizing Information GainHow do we construct the regions $R_1, \ldots, R_M$? Or put in other words: How do we decide on the splitting variables (i.e. $X_1, X_2, \ldots, X_p$), split points and what topology (shape) the tree should have? To explain this we focus on the CART (classification and regression tree) approach of decision trees as implemented in Scikit-learn. In order to describe the mathematics let us assume we have a data set with $p$ inputs for each of the $N$ observations: $(x_i, y_i)$ for $i = 1, 2, \ldots, N$, with $x_i = (x_{i1}, x_{i2}, \ldots, x_{ip})$. Most libraries (including Scikit-learn) have implemented binary decision trees, meaning that each node is split into two child nodes. 
Hence for binary splits at node $m$ we take initial region $R_m$ and select $j$ (of feature $X_j$) and threshold $t_m$ such that the resulting two half-planes $$\begin{equation}R_{\text{left}}(R_m; j, t_m) = \{(X, y) \, | \, X_j < t_m \} \qquad \text{and} \qquad R_{\text{right}}(R_m; j, t_m) = \{(X, y) \, | \, X_j \geq t_m \}\end{equation}$$maximize the **information gain ($G$)** for any value of $j$ and $t_m$. Let us define $\theta = (j, t_m)$ to simplify expressions. Given $R_m$ with $N_m$ being the total number of samples in $R_m$ and $n_{\text{left}}$, $n_{\text{right}}$ the number samples in the left ($R_{\text{left}}$) and right ($R_{\text{right}}$) child nodes, respectively, the information gain function $G$ is defined as $$\begin{equation}G(R_m; \theta) = H(R_m) - \left( \frac{n_{\text{left}}}{N_m} H(R_{\text{left}}) + \frac{n_{\text{right}}}{N_m} H(R_{\text{right}})\right)\end{equation}$$Here, $H()$ is simply a measure of impurity which we will get to shortly. With that, the information gain $G$ is simply the difference between the impurity of the parent node and the sum of the child node impurities. The lower the impurity of the child nodes, the larger information gain we get (Raschka (2015)).Our optimization task can therefore be formulated as to find parameter $\theta$ that maximizes the following expression at every node $m$:$$\begin{equation}\theta^* = \arg \max_{\theta} G(R_m; \theta)\end{equation}$$This is done recursively for every node until the stopping criteria is reached (max. depth, max. number of samples in region etc.; see above). Impurity Measures for Classification TreesThere are three common impurity measures (or splitting criteria), of which only the latter two are recommended for growing decision trees: Classification error rate ($H_E$), Gini index ($H_G$), cross-entropy ($H_H$). To discuss them, let us first define the proportion of class $k$ observations in node $m$ (Friedman et al. (2001)):$$\begin{equation}\hat{p}_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k)\end{equation}$$Earlier we mentioned that all observations in region $R_j$ are assigned to the same class following a majority vote. In more formal terms this means that observations in node $m$ are classified to class $k$ for which $k(m) = \arg \max_k \hat{p}_{mk}$. Now, the impurity measures are defined as follows:$$\begin{align}&\text{Classification Error rate: } & H_E(R_m) &= 1 - \max_k (p_{mk}) \\&\text{Gini Index: } & H_G(R_m) &= \sum_k p_{mk} (1 - p_{mk}) \\&\text{Cross-Entropy: } & H_E(R_m) &= - \sum_k p_{mk} \, \log(p_{mk})\end{align}$$Note that for binary classification tasks (e.g. $y \in \{0, 1\}$), $\log$ in the cross-entropy is usually the [logarithm to the base 2.](https://stackoverflow.com/questions/1859554/what-is-entropy-and-information-gain) Below figure graphs the three impurity measures with respect to $p$. As mentioned, the classification error rate should not be used as an impurity measure. In practice it is primarily used to prune a tree - a concept we will not discuss here. Cross-entropy is minimal if all samples at a node belong to the same class and maximal if we have a uniform distribution among the classes. Therefore, cross-entropy can be understood as a criterion that attempts to maximize the mutual information in the tree. The Gini-index on the other hand works towards minimizing the probability of misclassification. Similar to entropy it is maximal if classes are evenly distributed and minimal if the vast majority of samples belong to the same class. 
In practice both Gini index as well as cross-entropy usually produce similar results and thus it is not advisable to spend much time on evaluating trees using different impurity criteria (Raschka (2015)). Advantages and Disadvantages of Decision TreesDecision trees have several advantages over the other classification approaches discussed so far:* Intuition is easy to explain* Trees can be displayed graphically and are easily interpreted even by a layperson* Trees handle quantitative as well as qualitative features; there's no need to scale the values* Some people argue that decision trees mirror human thinking very closely, moreso than other approaches.However, simple decision trees have also disadvantages, some of them are significant:* Instability of trees/high variance: Small changes in the data often result in very different series of splits* Decision trees tend to build complex decision boundaries, which often results in overfitting the data* Low level of predictive accuracy compared other classification approaches discussed in this course. Decision Trees in PythonTo show how decision trees are applied in Python we once again rely on functions implemented in the Scikit-learn package. This time we will use the `Carseats` data set. It is again a data set that corresponds to James et al. (2013)'s book and contains information on child carseat sales at 400 different stores. `Sales` is the response variable with number of sold units in thousands. All other values are used as features. A detailed description of the data set can be found [here](https://cran.r-project.org/web/packages/ISLR/ISLR.pdf).
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv('Data/Carseats.csv')
df.head()
###Output
_____no_output_____
###Markdown
Let us assume you are a financial consultant and work on a market study that discusses appropriate sales channels for child car seats. You know that your client's strategy is to grow the business. Furthermore you learned that to run sales at break even at a sales location the company needs to sell approximately 4'000 units. What feature drives sales and in which stores would you advise your client to offer their product? Here a decision tree helps very much in visualizing the sales driver. Before we implement the model we prepare the data. Notice that decision trees are invariant to scaling meaning we could, but don't have to scale values. This is true for both quantitative as well as categorical variables. What we need to do though, is transforming categorical values `ShelveLoc, Urban` and `US` into numeric values. We will use pandas `map` method to (a) show an alternative to `pd.factorize` introduced in previous chapters and (b) ensure a mapping that does not confuse (`pd.factorize()` maps a column's first entry to value 0, second to 1 etc. This would mean that `pd.factorize()` would label `Yes` in columns `Urban` or `US` as 0. Yet `Yes` is predominantly represented by a 1. Therefore, with the `map` function we preclude any confusion).
###Code
# Create 'BreakEven' column with 1 if Sales >= 4k, else 0
df['BreakEven'] = df.Sales.map(lambda x: 1 if x>=4 else 0)
# Replace category names with numbers
df.ShelveLoc = df.ShelveLoc.map({'Bad':0, 'Medium': 1, 'Good': 2})
df.Urban = df.Urban.map({'No':0, 'Yes':1})
df.US = df.US.map({'No':0, 'Yes':1})
print(df.head(3))
# Assign features & response to X and y, respectively
X = df.drop(['Sales', 'BreakEven'], axis=1)
y = df.BreakEven
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Now we are in a position to run the `DecisionTreeClassifier`.
###Code
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(max_depth=4)
tree.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Creating a decision tree is as simple as that. From the above output we see that per default the function will use the Gini index as criterion. The only argument we set is the maximal depth. Alternatively we could define the maximum tree nodes, the minimum number of samples required to split an internal node or the minimum number of samples required to be at a leaf node. There are more options and it is best to check the [documentation page](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.htmlsklearn.tree.DecisionTreeClassifier) to get a thorough understanding of the available options. As with all other Scikit-learn functions, we can again call the usual performance metrics. The interpretation is as discussed in chapter 8.
###Code
# Load metrics sublibrary
from sklearn import metrics
# Print performance metrics
print('True proportion of sales >= 4k: ', y.sum() / y.shape[0])
print('Train score: ', tree.score(X_train, y_train))
print('Test score: ', tree.score(X_test, y_test))
print(37*'-')
# Confusion matrix
y_pred = tree.predict(X_test)
print('Confusion matrix: \n',
metrics.confusion_matrix(y_test, y_pred))
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted sales >=4k': y_pred,
'True sales >=4k': y_test})
confm.replace(to_replace={0:'No', 1:'Yes'}, inplace=True)
print(confm.groupby(['True sales >=4k','Predicted sales >=4k']).size().unstack('Predicted sales >=4k'))
###Output
True proportion of sales >= 4k: 0.91
Train score: 0.94375
Test score: 0.8875
-------------------------------------
Confusion matrix:
[[ 0 4]
[ 5 71]]
Predicted sales >=4k No Yes
True sales >=4k
No NaN 4.0
Yes 5.0 71.0
###Markdown
Visualizing Decision TreesUnfortunately, neither `matplotlib` nor `sklearn` have a plotting function integrated to visualize decision trees. However, with the help of additional libraries/packages, Python is able to create a decision tree figure detailing each decision step. To plot these decision trees, we have to rely on third party software. Getting things to run as expected has (at least in my case) proven to be a rather cumbersome experience. To spare you this experience, follow these steps (which at the time of this writing in December 2017 have solved the problem): * Download and install Graphviz from [the web](http://www.graphviz.org/download/). Mac users select the `.pkg` file corresponding to your system version (Snow Leopard, Lion etc.). Windows users select the `.msi` file. Run the execution file and follow the instructions.* Type in `!pip list` into an input field in a Jupyter notebook. This will list all installed Python packages (and its versions). Check if you have a package called `graphviz`. If yes, proceed with next step. If not, type `pip install graphviz` into a new command line. This should install the needed Python package.
###Code
!pip list
###Output
alabaster (0.7.10)
anaconda-client (1.6.5)
anaconda-navigator (1.6.8)
anaconda-project (0.8.0)
asn1crypto (0.22.0)
astroid (1.5.3)
astropy (2.0.2)
babel (2.5.0)
backports.shutil-get-terminal-size (1.0.0)
beautifulsoup4 (4.6.0)
bitarray (0.8.1)
bkcharts (0.2)
blaze (0.11.3)
bleach (2.0.0)
bokeh (0.12.7)
boto (2.48.0)
Bottleneck (1.2.1)
CacheControl (0.12.3)
certifi (2017.7.27.1)
cffi (1.10.0)
chardet (3.0.4)
click (6.7)
cloudpickle (0.4.0)
clyent (1.2.2)
colorama (0.3.9)
comtypes (1.1.2)
conda (4.4.8)
conda-build (3.0.22)
conda-verify (2.0.0)
contextlib2 (0.5.5)
cryptography (2.0.3)
cycler (0.10.0)
Cython (0.26.1)
cytoolz (0.8.2)
dask (0.15.2)
datashape (0.5.4)
decorator (4.1.2)
distlib (0.2.5)
distributed (1.18.3)
docutils (0.14)
entrypoints (0.2.3)
et-xmlfile (1.0.1)
fastcache (1.0.2)
filelock (2.0.12)
Flask (0.12.2)
Flask-Cors (3.0.3)
gevent (1.2.2)
glob2 (0.5)
graphviz (0.8.1)
greenlet (0.4.12)
h5py (2.7.0)
heapdict (1.0.0)
html5lib (0.999999999)
idna (2.6)
imageio (2.2.0)
imagesize (0.7.1)
ipykernel (4.6.1)
ipython (6.1.0)
ipython-genutils (0.2.0)
ipywidgets (7.0.0)
isort (4.2.15)
itsdangerous (0.24)
jdcal (1.3)
jedi (0.10.2)
Jinja2 (2.9.6)
jsonschema (2.6.0)
jupyter-client (5.1.0)
jupyter-console (5.2.0)
jupyter-core (4.3.0)
jupyterlab (0.27.0)
jupyterlab-launcher (0.4.0)
lazy-object-proxy (1.3.1)
llvmlite (0.20.0)
locket (0.2.0)
lockfile (0.12.2)
lxml (3.8.0)
MarkupSafe (1.0)
matplotlib (2.0.2)
mccabe (0.6.1)
menuinst (1.4.8)
mistune (0.7.4)
mpmath (0.19)
msgpack-python (0.4.8)
multipledispatch (0.4.9)
navigator-updater (0.1.0)
nbconvert (5.3.1)
nbformat (4.4.0)
networkx (1.11)
nltk (3.2.4)
nose (1.3.7)
notebook (5.0.0)
numba (0.35.0+10.g143f70e)
numexpr (2.6.2)
numpy (1.13.1)
numpydoc (0.7.0)
odo (0.5.1)
olefile (0.44)
openpyxl (2.4.8)
packaging (16.8)
pandas (0.20.3)
pandas-datareader (0.5.0)
pandocfilters (1.4.2)
partd (0.3.8)
path.py (10.3.1)
pathlib2 (2.3.0)
patsy (0.4.1)
pep8 (1.7.0)
pickleshare (0.7.4)
Pillow (4.2.1)
pip (9.0.1)
pkginfo (1.4.1)
ply (3.10)
progress (1.3)
prompt-toolkit (1.0.15)
psutil (5.2.2)
py (1.4.34)
pycodestyle (2.3.1)
pycosat (0.6.3)
pycparser (2.18)
pycrypto (2.6.1)
pycurl (7.43.0)
pydotplus (2.0.2)
pyflakes (1.5.0)
Pygments (2.2.0)
pylint (1.7.2)
pyodbc (4.0.17)
pyOpenSSL (17.2.0)
pyparsing (2.2.0)
PySocks (1.6.7)
pytest (3.2.1)
python-dateutil (2.6.1)
pytz (2017.2)
PyWavelets (0.5.2)
pywin32 (221)
PyYAML (3.12)
pyzmq (16.0.2)
QtAwesome (0.4.4)
qtconsole (4.3.1)
QtPy (1.3.1)
requests (2.18.4)
requests-file (1.4.1)
requests-ftp (0.3.1)
rope (0.10.5)
ruamel-yaml (0.11.14)
scikit-image (0.13.0)
scikit-learn (0.19.0)
scipy (0.19.1)
seaborn (0.8)
setuptools (36.5.0.post20170921)
simplegeneric (0.8.1)
singledispatch (3.4.0.3)
six (1.10.0)
snowballstemmer (1.2.1)
sortedcollections (0.5.3)
sortedcontainers (1.5.7)
Sphinx (1.6.3)
sphinxcontrib-websupport (1.0.1)
spyder (3.2.3)
SQLAlchemy (1.1.13)
statsmodels (0.8.0)
sympy (1.1.1)
tables (3.4.2)
tblib (1.3.2)
testpath (0.3.1)
toolz (0.8.2)
tornado (4.5.2)
traitlets (4.3.2)
typing (3.6.2)
unicodecsv (0.14.1)
urllib3 (1.22)
wcwidth (0.1.7)
webencodings (0.5.1)
Werkzeug (0.12.2)
wheel (0.29.0)
widgetsnbextension (3.0.2)
win-inet-pton (1.0.1)
win-unicode-console (0.5)
wincertstore (0.2)
wrapt (1.10.11)
xlrd (1.1.0)
XlsxWriter (0.9.8)
xlwings (0.11.4)
xlwt (1.3.0)
zict (0.1.2)
###Markdown
* Finally, we need to make sure that Graphviz' `.exe` file was added to your system `PATH`. [This is an environmental variable that tells the computer which directories to search for executable files (i.e., ready-to-run programs) in response to commands issued by a user](http://www.linfo.org/path_env_var.html). How do you know if Graphviz is on your `PATH`? Run below code to find out. (Alternatively you might open a shell window and on Windows call `$env:path.split(";")` or on a Mac `echo $PATH$`).
###Code
# On a Windows PC (--> remove # to have output)
%echo %PATH%
# On a Mac
#%echo $PATH
###Output
C:\Program Files\ImageMagick-7.0.5-Q16;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\MiKTeX 2.9\miktex\bin\x64\;C:\Program Files\Git\cmd;C:\Anaconda;C:\Anaconda\Library\mingw-w64\bin;C:\Anaconda\Library\usr\bin;C:\Anaconda\Library\bin;C:\Anaconda\Scripts;C:\Users\Ben Zimmermann\AppData\Local\Microsoft\WindowsApps;C:\Users\Ben Zimmermann\AppData\Local\GitHubDesktop\bin;C:\Program Files (x86)\Graphviz2.38\bin;;%USERPROFILE%\AppData\Local\Microsoft\WindowsApps'
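A platform-independent alternative is to search the `PATH` entries for Graphviz' layout executable (`dot`) directly from Python; a minimal sketch using only the standard library:

```python
import os

# Search every directory on PATH for the Graphviz 'dot' executable
candidates = ('dot', 'dot.exe')
found = any(os.path.exists(os.path.join(directory, exe))
            for directory in os.environ.get('PATH', '').split(os.pathsep)
            for exe in candidates)
print('Graphviz "dot" found on PATH:', found)
```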
###Markdown
* You see that in my case, Graphviz2.38 is the last entry on the list. In case it is missing on your machine, add Graphviz to the `PATH` by following the steps described [in this discussion on StackOverflow](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8). But be careful: this might cause some problems in the background with the Spyder IDE ([see discussion here](https://github.com/ContinuumIO/anaconda-issues/issues/1666)). With the Graphviz installation sorted out, we are now able to proceed with plotting the decision tree. Of the function's available input arguments (as in the code snippet below), `filled=True, rounded=True, class_names=['Loss', 'BrEven']` and `feature_names=X.columns.values` are optional, but they help in making the image visually more appealing by adding color, rounding the box edges, showing the class label counts at each node, and displaying the name of the splitting feature at the respective node. The plot below helps understand this.
###Code
import graphviz
from sklearn.tree import export_graphviz
# Create decision tree object
dot_data = export_graphviz(tree, filled=True, rounded=True,
class_names=['Loss', 'BrEven'],
feature_names=X.columns.values,
out_file=None)
# Visualize/Plot graph
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
###Markdown
The decision tree plot above helps us understand how the algorithm decides on a class. The coloring indicates the label. What we learn is that the driving factor for the given training set is the shelve location, followed by the product's selling price. If the shelve location is 0 (bad), we end up in the left side of the tree; if the shelve location is medium or good, we run through the right-hand side. `Gini` represents the resulting Gini index at that node, `samples` is the number of observations that end up in the respective node, and `values` indicates the true labels, e.g. the 2nd node from the bottom-left, `[4, 40]`, means that of the 44 samples 4 have label `Loss` and 40 belong to `BrEven`. Following the majority vote, the decision tree algorithm assigns all 44 samples in this node to class 'BrEven'. If you wish to save the decision tree to a `.pdf` or `.png` file, make sure to have the package `pydotplus` installed. You can use the same approach as above: * With `!pip list`, check if the package is already installed. * If not, use `!pip install pydotplus` to install it. Then you are ready to save the output.
###Code
from pydotplus import graph_from_dot_data
# Convert graph to dot-file
graph = graph_from_dot_data(dot_data)
# Here I save 'graph' to the 'Graphics' folder as png
# If pdf is preferred, replace both '...png' with '...pdf'
graph.write_png('Graphics/0210_DT_Carseats.png');
###Output
_____no_output_____
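As a side note on the `gini` values displayed in the tree plot: for a node with class counts `values = [4, 40]`, the Gini index follows directly from the formula $1 - \sum_k p_k^2$, as this quick illustration shows:

```python
# Gini index of a node holding 4 'Loss' and 40 'BrEven' samples
counts = [4, 40]
n = float(sum(counts))
gini = 1 - sum((c / n) ** 2 for c in counts)
print(round(gini, 3))  # ~0.165
```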
###Markdown
Decision Trees and Cross Validation We have mentioned above that one of the major disadvantages of decision trees is the risk of overfitting. Having discussed cross validation in the previous chapter, we ought to ask how cross validation can be used to prevent overfitting. The answer to this seemingly trivial question might be more complex than initially thought. If we apply a $k$-fold cross validation and generate $k$ decision trees, one for every fold in the training set, then these $k$ trees will most certainly look different from fold to fold. Each of the $k$ decision trees will still suffer from overfitting. Furthermore, CV does not tell us anything about which of the $k$ trees we are supposed to select for prediction, and ultimately, this ought to be our goal. If CV produces 10 different trees but doesn't tell us which one of these to choose, we have gained nothing. Choosing the one tree (among the $k$) with, for example, the lowest error rate will not do it either, as this approach is most probably flawed: such a model is based on even less information than what would be available through the full training set, and in general models with more information beat those with less. So CV will not help us on this end. However, CV is still of use. To recapitulate the idea behind CV: fundamentally, the purpose of cross validation is not to help select a particular instance of the $k$ decision trees but rather to qualify the model, i.e. to provide metrics such as the error rate, which in turn can be useful in assessing the level of precision one can expect from the model. Therefore, CV comes into play when we are tuning the model to find the optimal hyperparameters. As an example: usually we do not know the optimal tree form. Does it generalize best when it has depth 2, 5, or 10? Or is our stopping criterion to have no more than 5, 10, or 20 observations per region $R_j$? Here we can try different parameters in combination with CV, and this will provide an answer on how well the model will generalize to new data, given the set of hyperparameters. Cross Validation Below we show two setups to do this: with loops or with grid search. The first approach follows what we learned in the previous chapter on cross validation.
###Code
# Max depth
maxDepth = np.array([1, 2, 5, 10])
# Minimum number of samples required to split any internal node
minSamplesNode = np.array([2, 5, 10, 20])
# The minimum number of samples required to be at a leaf/terminal node
minSamplesLeaf = np.array([2, 5, 10, 20])
# Import necessary functions
from sklearn.model_selection import StratifiedKFold, cross_val_score
# Create k-Fold CV object
kFold = StratifiedKFold(n_splits=10, random_state=10)
# Loop through maxDepth values, run CV and print results
for i in maxDepth:
tree = DecisionTreeClassifier(max_depth=i, random_state=0)
scrs = cross_val_score(tree, X_train, y_train, cv=kFold)
print('Score (depth ={0: 3.0f}): {1: .3f} +/- {2: .3f}'.format(i, np.mean(scrs), np.std(scrs)))
print(50*'-')
# Loop through minSamplesNode values, run CV and print results
for i in minSamplesNode:
tree = DecisionTreeClassifier(min_samples_split=i, random_state=0)
scrs = cross_val_score(tree, X_train, y_train, cv=kFold)
print('Score (min sample at node ={0: 3.0f}): {1: .3f} +/- {2: .3f}'.format(i, np.mean(scrs), np.std(scrs)))
print(50*'-')
# Loop through minSamplesLeaf values, run CV and print results
for i in minSamplesLeaf:
tree = DecisionTreeClassifier(min_samples_leaf=i, random_state=0)
scrs = cross_val_score(tree, X_train, y_train, cv=kFold)
print('Score (min sample at leaf ={0: 3.0f}): {1: .3f} +/- {2: .3f}'.format(i, np.mean(scrs), np.std(scrs)))
###Output
Score (depth = 1): 0.900 +/- 0.011
Score (depth = 2): 0.866 +/- 0.050
Score (depth = 5): 0.857 +/- 0.043
Score (depth = 10): 0.851 +/- 0.070
--------------------------------------------------
Score (min sample at node = 2): 0.851 +/- 0.070
Score (min sample at node = 5): 0.851 +/- 0.080
Score (min sample at node = 10): 0.863 +/- 0.065
Score (min sample at node = 20): 0.850 +/- 0.047
--------------------------------------------------
Score (min sample at leaf = 2): 0.854 +/- 0.080
Score (min sample at leaf = 5): 0.866 +/- 0.050
Score (min sample at leaf = 10): 0.869 +/- 0.047
Score (min sample at leaf = 20): 0.869 +/- 0.056
###Markdown
Based on the output we can conclude that we get better scores with shallower trees (fewer nodes). As for the minimum number of samples at a node/leaf, the differences are too small to judge. However, what we can also conclude is that this is a fairly cumbersome process: three separate loops to find the optimal hyperparameters. Furthermore, these three loops each check only one criterion, but what if we were interested in **all possible combinations**? Maybe we get better results if we combine them - we only know if we check. And, as you should be expecting by now, there's a convenient way of doing this: via grid search. Grid Search The approach of grid search is fairly simple: it's a brute-force search paradigm where we specify a list of values for different hyperparameters. The algorithm evaluates the model performance for each combination of hyperparameters to obtain the optimal combination of values from this set (Raschka (2015)). As usual we use a code example to show how this works.
###Code
from sklearn.model_selection import GridSearchCV
# Define the hyperparameter values to be tested
param_grid = {'criterion': ['gini', 'entropy'],
'max_depth': maxDepth,
'min_samples_split': minSamplesNode,
'min_samples_leaf': minSamplesLeaf}
# Run brute-force grid search
gs = GridSearchCV(estimator=DecisionTreeClassifier(random_state=0),
param_grid=param_grid,
scoring='accuracy',
cv=kFold, n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
###Output
0.9
{'criterion': 'gini', 'max_depth': 1, 'min_samples_leaf': 2, 'min_samples_split': 2}
###Markdown
Using the preceding code, we train and tune a `DecisionTreeClassifier` on the given parameter grid. For this we define a dictionary called `param_grid` and pass it to `GridSearchCV`. Using the training data, we obtain the score of the best-performing model via the `best_score_` attribute (here based on the accuracy measure) and the corresponding parameters via the `best_params_` attribute. As it turns out, we are unable to increase the accuracy score by combining stopping criteria. The best result is indeed obtained with a max. depth of 1. If a combination of the three criteria (or the entropy criterion) were to yield better results, it would mean that `min_samples_leaf` and/or `min_samples_split` would take values larger than the minimum value of 2. It is important to note that this result should not deceive you into believing that overfitting with decision trees is a myth. Here we seem to come across a rare exception where a pruned tree of depth 1 yields the best performance. In general, decision trees have much better training results on deeper grown trees. Hence the risk of overfitting.> Note that grid search might be a convenient and powerful way of tuning hyperparameters, but because it is a brute-force approach it is computationally very expensive. Depending on the number of processors you run your script on and the task at hand, this might take substantial time. If, for whatever reason, this is not feasible, the `RandomizedSearchCV` class might be a viable alternative. This class draws random parameters from sampling distributions within a specified budget. See [the documentation for more details](http://scikit-learn.org/stable/modules/grid_search.html#randomized-parameter-optimization). Finally, to estimate the performance of these parameters on the independent test dataset, we can run these three lines:
###Code
# Extract best parameter
clf = gs.best_estimator_
# Fit model given best parameter
clf.fit(X_train, y_train)
# Print out score on Test dataset
print('Test accuracy: {0: .4f}'.format(clf.score(X_test, y_test)))
###Output
Test accuracy: 0.9500
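Regarding the `RandomizedSearchCV` alternative mentioned above, here is a minimal sketch of how it could be set up, reusing the `param_grid` and `kFold` objects defined earlier; the sampling budget `n_iter=20` is an arbitrary illustrative choice:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import RandomizedSearchCV

rs = RandomizedSearchCV(estimator=DecisionTreeClassifier(random_state=0),
                        param_distributions=param_grid,  # values are sampled from this grid
                        n_iter=20,                       # budget: 20 random combinations
                        scoring='accuracy',
                        cv=kFold, random_state=0, n_jobs=-1)
rs = rs.fit(X_train, y_train)
print(rs.best_score_)
print(rs.best_params_)
```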
###Markdown
Random Forest Turning Weaknesses into Strengths CV is so valuable because it provides reliable information on how well a model generalizes. Recall that given a set of $n$ independent observations $Z_1, Z_2, \ldots, Z_n$, each with variance $\sigma^2$, the variance of the mean $\bar{Z}$ of the observations is given by $\sigma^2/n$ (see appendix of the script). This shows that if we average a set of (independent) performance metrics (e.g. classification error), we actually reduce the variance of this error. And in doing so, we increase the validity of said metric. If we now extend this idea from assessing performance metrics to predicting outcomes, we enter the field of **ensemble models**. These models produce $k$ independent predictions (e.g. trees) and assign the class label based on a majority vote of the $k$ outcomes. In doing so we lower the variance without compromising on the low bias of decision trees. It is therefore easy to see why ensemble methods have proven to be extremely valuable, especially in the field of decision trees. The most prominent ensemble algorithm with respect to decision trees is called 'Random Forest'. Let us first define the steps of a random forest model (Raschka (2015, p. 90)): 1. Draw a random bootstrap sample of size $n$ (randomly choose $n$ samples from the training set with replacement) 2. Grow a decision tree from the bootstrap sample. At each node: 1. Randomly select $m$ features without replacement (with $m < p$) 2. Split the node using the feature that provides the best split according to the objective function (e.g. by maximizing the information gain) 3. Repeat steps 1. and 2. $B$ times 4. Aggregate the $B$ predictions (one per tree) and assign the class label by majority vote (a rough from-scratch sketch of these four steps is given further below). Step two above might seem odd at first: why would we restrict the model to only choose from a subset $m$ of features (instead of selecting from the complete set $p$)? This is best explained with James et al. (2013, p. 320):> "Suppose that there is one very strong predictor in the data set, along with a number of other moderately strong predictors. Then in the collection of bagged trees, most or all of the trees will use this strong predictor in the top split. Consequently, all of the bagged trees will look quite similar to each other. Hence the predictions from the bagged trees will be highly correlated. Unfortunately, averaging many highly correlated quantities does not lead to as large of a reduction in variance as averaging many uncorrelated quantities. [...] Random forests overcome this problem by forcing each split to consider only a subset of the predictors. Therefore, on average $(p - m)/p$ of the splits will not even consider the strong predictor, and so other predictors will have more of a chance. We can think of this process as *decorrelating* the trees, thereby making the average of the resulting trees less variable and hence more reliable." On a side note: there exists a predecessor algorithm that works like a random forest except that it sets $m=p$ in step 2.1 by default. Scikit-learn implements it as `BaggingClassifier()`. Since random forests overcome the problem of correlated trees, they are today clearly the preferred approach. For this reason we will only discuss the random forest implementation. Selecting Random Forest's Hyperparameters Though the interpretability of a random forest does not meet the simplicity of a single decision tree, a big advantage is that we do not have to worry that much about choosing good hyperparameter values.
We have three primary values to set: * The size $n$ of the bootstrap (step 1) * The subset size $m$ of possible features (step 2) * The number of iterations $B$ (step 3). Typically, the larger the number of trees $B$, the better the performance of our random forest classifier. But this of course comes at the expense of (potentially significant) increased computational costs. Additionally, the marginal improvement decreases as the number of trees is increased, i.e. at a certain point the cost in computation time will outgrow the benefit in prediction accuracy from more trees. In the Scikit-learn implementation, this hyperparameter is controlled through the `n_estimators` argument. The feature subset size ($m$) to consider at each node is typically set to $m = \sqrt{p}$, that is, the number of predictors considered at each split is approximately equal to the square root of the total number of predictors $p$. Scikit-learn uses the `max_features` argument to control it. Finally, via the size $n$ of the bootstrap we control the bias-variance trade-off. A large value for $n$ will decrease randomness, and thus such a model is more likely to overfit. On the other hand, preventing overfitting by selecting smaller values comes at the expense of the model's predictive performance. And since predictive accuracy is what we are most interested in, the vast majority of random forest implementations, including the `RandomForestClassifier` implementation in Scikit-learn, set the bootstrap sample size $n$ by default to the number of samples in the original training set. This provides a good bias-variance trade-off (Raschka (2015)). Random Forest in Scikit-Learn There is an easily accessible implementation of a random forest classifier in Scikit-learn that we can use. For [a description of the available hyperparameters please check again the function's documentation](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). It is also left to the reader to investigate how CV and/or grid search can improve the performance. For this, the same steps as explained for the `DecisionTreeClassifier` function can be applied.
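For intuition, here is a rough from-scratch sketch of steps 1-4 from the algorithm description above. It assumes numeric, array-like `X_train`/`y_train`/`X_test`, uses `DecisionTreeClassifier` as the base learner, and is meant for illustration only rather than as the implementation we use below:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def simple_random_forest_predict(X_train, y_train, X_test, B=100, seed=0):
    rng = np.random.RandomState(seed)
    Xtr, Xte = np.asarray(X_train), np.asarray(X_test)
    classes, y_enc = np.unique(np.asarray(y_train), return_inverse=True)
    n, p = Xtr.shape
    m = max(1, int(np.sqrt(p)))                        # typical choice: m = sqrt(p)
    votes = np.zeros((B, Xte.shape[0]), dtype=int)
    for b in range(B):
        idx = rng.randint(0, n, n)                     # 1. bootstrap sample of size n (with replacement)
        tree = DecisionTreeClassifier(max_features=m,  # 2. consider only m random features per split
                                      random_state=b)
        tree.fit(Xtr[idx], y_enc[idx])
        votes[b] = tree.predict(Xte)                   # 3. repeat for B trees
    majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
    return classes[majority]                           # 4. class label by majority vote
```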
###Code
from sklearn.ensemble import RandomForestClassifier
# Create classifier object and fit it to data
forest = RandomForestClassifier(criterion='gini', random_state=0, n_jobs=-1)
forest.fit(X_train, y_train)
# Print test score
print('Test accuracy: {0: .4f}'.format(forest.score(X_test, y_test)))
###Output
Test accuracy: 0.9500
|
01_basic_python.ipynb | ###Markdown
_Welcome to the Google-hosted version of Jupyter notebooks_ Basic python Variables
###Code
a = 10
a
b = a * 10
print(b)
c = d  # NameError if run top to bottom: 'd' has not been defined yet
a = "Hola, Mundo"
print(a)
d = "1981"
c = int(d)
c
d
print(a, b, c, d)
###Output
_____no_output_____
###Markdown
**Ctrl + Enter** automatically runs a cell. **Shift + Enter** runs a cell and moves on to the next one; if it does not exist, it creates one. _This is Markdown!_ Conditional statements
###Code
if b > c:
print("B es mayor")
print(":)")
else:
print("C es mayor o igual")
import random
a = random.randint(100, 1000)
if a < 150:
print("Caso 1")
elif a < 350:
print("Caso 2")
elif a < 650:
print("Caso 3")
else:
print("Ningun caso")
###Output
Ningun caso
###Markdown
Loops
###Code
for i in range(10):
print(i)
for i in range(5, 10):
print(i)
for i in range(5, 1000, 200):
print(i)
for i in range(10, 0, -1):
print(i)
while True:
r = random.randint(0, 10)
if r % 2 == 0:
print("Soy Par", r)
elif r % 5:
break
print("finalizado")
while True:
r = random.randint(0, 10)
print(r)
if r % 2 == 0:
continue
print("No fue par")
if r % 5 == 0:
break
print("finalizado")
###Output
6
4
1
No fue par
7
No fue par
5
No fue par
finalizado
###Markdown
Functions
###Code
def func1():
print("Hola")
func1()
def func2(nombre):
print("hola", nombre)
func2("Luis")
def func3(a, b):
return a + b
print(func3(6, 8))
print(func3("Hola", "Mundo"))
###Output
14
HolaMundo
###Markdown
Closures, or capturing context
###Code
def func4():
func4.x = 10
def func41():
func4.x += 1
return func4.x
return func41
f = func4()
print(f())
print(f())
print(f())
print(f())
def func5(lado):
return lado*lado, lado*4
area, perim = func5(10)
print(area, perim)
###Output
100 40
###Markdown
Data structures Lists (which are actually arrays)
###Code
a = [1, 2, 3, 4, 5]
a
b = [0]*10
b
###Output
_____no_output_____
###Markdown
Comprehensions
###Code
c = [i for i in range(10)]
c
for elemento in c:
print(elemento)
d = [random.randint(10, 20) for _ in range(10)]
d
for elem in d:
print(elem)
for i, elem in enumerate(d):
print(i, elem)
e = "Hola Mundo"
for char in e:
print(char)
matriz = [[1, 2], [3, 4], [5, 6]]
matriz
for row in matriz:
print(row)
for x, y in matriz:
print(x, y)
###Output
1 2
3 4
5 6
###Markdown
Dictionaries
###Code
a = {}
a['carlos'] = [15, 16, 17]
a['rosa'] = [13, 16, 18]
a['rene'] = [14, 16, 16]
a
###Output
_____no_output_____
###Markdown
Basic Python for numerical science **List vs Tuple vs Dict**

|         | List  | Tuple | Dict |
|---------|-------|-------|------|
| Read    | index | index | key  |
| Mutable | yes   | no    | yes  |
###Code
# Try change List
try:
a = [0,1,2,3] # List
a[0] = 1
print("List changed to",a)
except:
print("Can't change List")
# Try change Tuple
try:
b = (0,1,2,3) # Tuple
b[0] = 1
print("Tuple changed to",b)
except:
print("Can't change Tuple")
# Try change Dict by index
try:
c= {'a':'0','b':'1','c':'2','d':'3'}
cc = c.copy()
cc[0] = 1
print("Dict changed from",c,"to",cc)
except:
print("Can't change Dict by index")
# Try change Dict by key
try:
c = {'a':'0','b':'1','c':'2','d':'3'}
cc = c.copy()
cc['a'] = 1
print("Dict changed from",c,"to",cc)
except:
print("Can't change Dict by key")
###Output
List changed to [1, 1, 2, 3]
Can't change Tuple
Dict changed from {'a': '0', 'b': '1', 'c': '2', 'd': '3'} to {'a': '0', 'b': '1', 'c': '2', 'd': '3', 0: 1}
Dict changed from {'a': '0', 'b': '1', 'c': '2', 'd': '3'} to {'a': 1, 'b': '1', 'c': '2', 'd': '3'}
###Markdown
At this point you will have noticed that we need to use `Dict.copy()`, while `List` and `Tuple` don't need it. Let's check out why.
###Code
after = before = {'a':'original'}
after['a'] = 'changed'
print(after,before)
assert after!=before, "They're same!!!?????"
before = {'a':'original'}
after = before.copy()
after['a'] = 'changed'
print(after,before)
assert after!=before, "They are same!!"
###Output
{'a': 'changed'} {'a': 'original'}
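The reason is that plain assignment (`after = before`) does not copy anything; it only binds a second name to the very same dictionary object, so a change made through either name is visible through both, while `copy()` creates a new object. A small check:

```python
a = b = {'k': 1}         # two names, one dict object
print(id(a) == id(b))    # True  -> same object, so mutating a also "changes" b
c = a.copy()
print(id(c) == id(a))    # False -> a real (shallow) copy
```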
###Markdown
Lambda: the magic of anonymously defined functions. We don't have to use `def` to define a function every time. :)
###Code
increase1 = lambda x: x+1
###Output
_____no_output_____
###Markdown
This will create a function named `increase1` which takes one parameter `x` and returns `x+1`
###Code
increase1(3)
sum_then_minus = lambda x,y,z: x+y-z
sum_then_minus(3,2,1) # 3+2-1
split_then_join_with_period = lambda text: ".".join(text.split(" "))
split_then_join_with_period("Hello the amazing AI Builders")
import random
random_weight = lambda : random.random() # This lambda doesn't take any parameters
random_weight()
###Output
_____no_output_____ |
notebooks/teste-docker.ipynb | ###Markdown
Import Libs
###Code
import pandas as pd
import spacy
import nltk
nltk.download('stopwords')
from api_model import nlextract
pd.set_option('display.max_colwidth', -1)
pd.set_option('display.max_rows', None)
extractor = nlextract.NLExtractor()
###Output
[nltk_data] Downloading package stopwords to /opt/bitnami/jupyterhub-
[nltk_data] singleuser/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Read Excel
###Code
caminho = '/opt/dna/find-keywords/datalake/Pedidos - Finalizados.xlsx'
df = pd.read_excel(caminho, engine='openpyxl')
df.info()
df.head()
df = df.sample(n=5000)
df['Conversas'].count()
coluna_texto = 'Conversas'
df[f'{coluna_texto}_clean'] = df[coluna_texto].apply(extractor.udf_clean_text) ## removes all accents, special characters,
# and punctuation from the text.
df[[f'{coluna_texto}_clean', 'Conversas']].head(4)
## 4 digits
patters = r'\d{4}\s|\d{4}|(?:\.|,|[0-9]{4})*|\d{4}\s'
## 10 digits
#patters = r'\d{10}\s'
## extract 4 or 10 digits
df['extract_digits'] = df[coluna_texto].apply(lambda x: extractor.udf_extract_digits(x, patters))
df[['extract_digits', coluna_texto]].head(3)
list_numbers = ['2022']
df['extract_digits_remove'] = df['extract_digits'].apply(lambda x: extractor.remove_specific_numbers(x, list_numbers))
df[['extract_digits_remove','extract_digits', coluna_texto]].head(15)
df['len'] = df['extract_digits_remove'].str.len()
df[['len','extract_digits_remove','extract_digits']].head(3)
###Output
_____no_output_____ |
Machine Learning/Clustering and Retrieval/Week 3/2_kmeans-with-text-data_blank.ipynb | ###Markdown
k-means with text data In this assignment you will: * Cluster Wikipedia documents using k-means * Explore the role of random initialization on the quality of the clustering * Explore how results differ after changing the number of clusters * Evaluate clustering, both quantitatively and qualitatively. When properly executed, clustering uncovers valuable insights from a set of unlabeled documents. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. Import necessary packages The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html).
###Code
import graphlab
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
from scipy.sparse import csr_matrix
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
###Output
This non-commercial license of GraphLab Create for academic use is assigned to [email protected] and will expire on August 21, 2017.
###Markdown
Load data, extract features To work with text data, we must first convert the documents into numerical features. As in the first assignment, let's extract TF-IDF features for each article.
###Code
wiki = graphlab.SFrame('people_wiki.gl/')
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
###Output
_____no_output_____
###Markdown
For the remainder of the assignment, we will use sparse matrices. Sparse matrices are matrices that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format. We included plenty of comments for the curious; if you'd like, you may skip the next block and treat the function as a black box.
###Code
def sframe_to_scipy(x, column_name):
'''
Convert a dictionary column of an SFrame into a sparse matrix format where
each (row_id, column_id, value) triple corresponds to the value of
x[row_id][column_id], where column_id is a key in the dictionary.
Example
>>> sparse_matrix, map_key_to_index = sframe_to_scipy(sframe, column_name)
'''
assert x[column_name].dtype() == dict, \
'The chosen column must be dict type, representing sparse data.'
# Create triples of (row_id, feature_id, count).
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack(column_name, ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# 1. Fit the transformer using the above data.
f.fit(x)
# 2. The transform takes 'feature' column and adds a new column 'feature_encoding'.
x = f.transform(x)
# 3. Get the feature mapping.
mapping = f['feature_encoding']
# 4. Get the feature id to use for each key.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
# The conversion will take about a minute or two.
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
tf_idf
###Output
_____no_output_____
###Markdown
The above matrix contains a TF-IDF score for each of the 59071 pages in the data set and each of the 547979 unique words. Normalize all vectors As discussed in the previous assignment, Euclidean distance can be a poor metric of similarity between documents, as it unfairly penalizes long articles. For a reasonable assessment of similarity, we should disregard the length information and use length-agnostic metrics, such as cosine distance.The k-means algorithm does not directly work with cosine distance, so we take an alternative route to remove length information: we normalize all vectors to be unit length. It turns out that Euclidean distance closely mimics cosine distance when all vectors are unit length. In particular, the squared Euclidean distance between any two vectors of length one is directly proportional to their cosine distance.We can prove this as follows. Let $\mathbf{x}$ and $\mathbf{y}$ be normalized vectors, i.e. unit vectors, so that $\|\mathbf{x}\|=\|\mathbf{y}\|=1$. Write the squared Euclidean distance as the dot product of $(\mathbf{x} - \mathbf{y})$ to itself:\begin{align*}\|\mathbf{x} - \mathbf{y}\|^2 &= (\mathbf{x} - \mathbf{y})^T(\mathbf{x} - \mathbf{y})\\ &= (\mathbf{x}^T \mathbf{x}) - 2(\mathbf{x}^T \mathbf{y}) + (\mathbf{y}^T \mathbf{y})\\ &= \|\mathbf{x}\|^2 - 2(\mathbf{x}^T \mathbf{y}) + \|\mathbf{y}\|^2\\ &= 2 - 2(\mathbf{x}^T \mathbf{y})\\ &= 2(1 - (\mathbf{x}^T \mathbf{y}))\\ &= 2\left(1 - \frac{\mathbf{x}^T \mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|}\right)\\ &= 2\left[\text{cosine distance}\right]\end{align*}This tells us that two **unit vectors** that are close in Euclidean distance are also close in cosine distance. Thus, the k-means algorithm (which naturally uses Euclidean distances) on normalized vectors will produce the same results as clustering using cosine distance as a distance metric.We import the [`normalize()` function](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html) from scikit-learn to normalize all vectors to unit length.
###Code
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
###Output
_____no_output_____
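As a quick numeric sanity check of the identity derived above:

```python
import numpy as np

x = np.random.rand(5); x /= np.linalg.norm(x)   # two random unit vectors
y = np.random.rand(5); y /= np.linalg.norm(y)
lhs = np.sum((x - y) ** 2)                      # squared Euclidean distance
rhs = 2 * (1 - x.dot(y))                        # 2 * cosine distance
print(np.allclose(lhs, rhs))                    # True
```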
###Markdown
Implement k-means Let us implement the k-means algorithm. First, we choose an initial set of centroids. A common practice is to choose randomly from the data points.**Note:** We specify a seed here, so that everyone gets the same answer. In practice, we highly recommend to use different seeds every time (for instance, by using the current timestamp).
###Code
def get_initial_centroids(data, k, seed=None):
'''Randomly choose k data points as initial centroids'''
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
n = data.shape[0] # number of data points
# Pick K indices from range [0, N).
rand_indices = np.random.randint(0, n, k)
# Keep centroids as dense format, as many entries will be nonzero due to averaging.
# As long as at least one document in a cluster contains a word,
# it will carry a nonzero weight in the TF-IDF vector of the centroid.
centroids = data[rand_indices,:].toarray()
return centroids
###Output
_____no_output_____
###Markdown
After initialization, the k-means algorithm iterates between the following two steps: 1. Assign each data point to the closest centroid. $$z_i \gets \mathrm{argmin}_j \|\mu_j - \mathbf{x}_i\|^2$$ 2. Revise centroids as the mean of the assigned data points. $$\mu_j \gets \frac{1}{n_j}\sum_{i:z_i=j} \mathbf{x}_i$$ In pseudocode, we iteratively do the following:

```
cluster_assignment = assign_clusters(data, centroids)
centroids = revise_centroids(data, k, cluster_assignment)
```

Assigning clusters How do we implement Step 1 of the main k-means loop above? First import the `pairwise_distances` function from scikit-learn, which calculates Euclidean distances between rows of given arrays. See [this documentation](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html) for more information. For the sake of demonstration, let's look at documents 100 through 102 as query documents and compute the distances between each of these documents and every other document in the corpus. In the k-means algorithm, we will have to compute pairwise distances between the set of centroids and the set of documents.
###Code
from sklearn.metrics import pairwise_distances
# Get the TF-IDF vectors for documents 100 through 102.
queries = tf_idf[100:102,:]
# Compute pairwise distances from every data point to each query vector.
dist = pairwise_distances(tf_idf, queries, metric='euclidean')
print dist
###Output
[[ 1.41000789 1.36894636]
[ 1.40935215 1.41023886]
[ 1.39855967 1.40890299]
...,
[ 1.41108296 1.39123646]
[ 1.41022804 1.31468652]
[ 1.39899784 1.41072448]]
###Markdown
More formally, `dist[i,j]` is assigned the distance between the `i`th row of `X` (i.e., `X[i,:]`) and the `j`th row of `Y` (i.e., `Y[j,:]`). **Checkpoint:** For a moment, suppose that we initialize three centroids with the first 3 rows of `tf_idf`. Write code to compute distances from each of the centroids to all data points in `tf_idf`. Then find the distance between row 430 of `tf_idf` and the second centroid and save it to `dist`.
###Code
# Students should write code here
centroid=tf_idf[0:3]
distances=pairwise_distances(tf_idf,centroid,metric='euclidean')
dist=distances[430][1]
print dist
'''Test cell'''
if np.allclose(dist, pairwise_distances(tf_idf[430,:], tf_idf[1,:])):
print('Pass')
else:
print('Check your code again')
###Output
Pass
###Markdown
**Checkpoint:** Next, given the pairwise distances, we take the minimum of the distances for each data point. Fittingly, NumPy provides an `argmin` function. See [this documentation](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.argmin.html) for details.Read the documentation and write code to produce a 1D array whose i-th entry indicates the centroid that is the closest to the i-th data point. Use the list of distances from the previous checkpoint and save them as `distances`. The value 0 indicates closeness to the first centroid, 1 indicates closeness to the second centroid, and so forth. Save this array as `closest_cluster`.**Hint:** the resulting array should be as long as the number of data points.
###Code
# Students should write code here
closest_cluster=np.argmin(distances,axis=1)
print closest_cluster
'''Test cell'''
reference = [list(row).index(min(row)) for row in distances]
if np.allclose(closest_cluster, reference):
print('Pass')
else:
print('Check your code again')
###Output
Pass
###Markdown
**Checkpoint:** Let's put these steps together. First, initialize three centroids with the first 3 rows of `tf_idf`. Then, compute distances from each of the centroids to all data points in `tf_idf`. Finally, use these distance calculations to compute cluster assignments and assign them to `cluster_assignment`.
###Code
# Students should write code here
centroid=tf_idf[0:3]
distances=pairwise_distances(tf_idf,centroid,metric='euclidean')
cluster_assignment=np.argmin(distances,axis=1)
if len(cluster_assignment)==59071 and \
np.array_equal(np.bincount(cluster_assignment), np.array([23061, 10086, 25924])):
print('Pass') # count number of data points for each cluster
else:
print('Check your code again.')
###Output
Pass
###Markdown
Now we are ready to fill in the blanks in this function:
###Code
def assign_clusters(data, centroids):
# Compute distances between each data point and the set of centroids:
# Fill in the blank (RHS only)
distances_from_centroids = pairwise_distances(data,centroids,metric='euclidean')
# Compute cluster assignments for each data point:
# Fill in the blank (RHS only)
cluster_assignment = np.argmin(distances_from_centroids,axis=1)
return cluster_assignment
###Output
_____no_output_____
###Markdown
**Checkpoint**. For the last time, let us check if Step 1 was implemented correctly. With rows 0, 2, 4, and 6 of `tf_idf` as an initial set of centroids, we assign cluster labels to rows 0, 10, 20, ..., and 90 of `tf_idf`. The resulting cluster labels should be `[0, 1, 1, 0, 0, 2, 0, 2, 2, 1]`.
###Code
if np.allclose(assign_clusters(tf_idf[0:100:10], tf_idf[0:8:2]), np.array([0, 1, 1, 0, 0, 2, 0, 2, 2, 1])):
print('Pass')
else:
print('Check your code again.')
###Output
Pass
###Markdown
Revising clusters Let's turn to Step 2, where we compute the new centroids given the cluster assignments. SciPy and NumPy arrays allow for filtering via Boolean masks. For instance, we filter all data points that are assigned to cluster 0 by writing```data[cluster_assignment==0,:]``` To develop intuition about filtering, let's look at a toy example consisting of 3 data points and 2 clusters.
###Code
data = np.array([[1., 2., 0.],
[0., 0., 0.],
[2., 2., 0.]])
centroids = np.array([[0.5, 0.5, 0.],
[0., -0.5, 0.]])
###Output
_____no_output_____
###Markdown
Let's assign these data points to the closest centroid.
###Code
cluster_assignment = assign_clusters(data, centroids)
print cluster_assignment
###Output
[0 1 0]
###Markdown
The expression `cluster_assignment==1` gives a list of Booleans that says whether each data point is assigned to cluster 1 or not:
###Code
cluster_assignment==1
###Output
_____no_output_____
###Markdown
Likewise for cluster 0:
###Code
cluster_assignment==0
###Output
_____no_output_____
###Markdown
In lieu of indices, we can put in the list of Booleans to pick and choose rows. Only the rows that correspond to a `True` entry will be retained.First, let's look at the data points (i.e., their values) assigned to cluster 1:
###Code
data[cluster_assignment==1]
###Output
_____no_output_____
###Markdown
This makes sense since [0 0 0] is closer to [0 -0.5 0] than to [0.5 0.5 0].Now let's look at the data points assigned to cluster 0:
###Code
data[cluster_assignment==0]
###Output
_____no_output_____
###Markdown
Again, this makes sense since these values are each closer to [0.5 0.5 0] than to [0 -0.5 0].Given all the data points in a cluster, it only remains to compute the mean. Use [np.mean()](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.mean.html). By default, the function averages all elements in a 2D array. To compute row-wise or column-wise means, add the `axis` argument. See the linked documentation for details. Use this function to average the data points in cluster 0:
###Code
data[cluster_assignment==0].mean(axis=0)
###Output
_____no_output_____
###Markdown
We are now ready to complete this function:
###Code
def revise_centroids(data, k, cluster_assignment):
new_centroids = []
for i in xrange(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment==i]
# Compute the mean of the data points. Fill in the blank (RHS only)
centroid = member_data_points.mean(axis=0)
# Convert numpy.matrix type to numpy.ndarray type
centroid = centroid.A1
new_centroids.append(centroid)
new_centroids = np.array(new_centroids)
return new_centroids
###Output
_____no_output_____
###Markdown
**Checkpoint**. Let's check our Step 2 implementation. Letting rows 0, 10, ..., 90 of `tf_idf` as the data points and the cluster labels `[0, 1, 1, 0, 0, 2, 0, 2, 2, 1]`, we compute the next set of centroids. Each centroid is given by the average of all member data points in corresponding cluster.
###Code
result = revise_centroids(tf_idf[0:100:10], 3, np.array([0, 1, 1, 0, 0, 2, 0, 2, 2, 1]))
if np.allclose(result[0], np.mean(tf_idf[[0,30,40,60]].toarray(), axis=0)) and \
np.allclose(result[1], np.mean(tf_idf[[10,20,90]].toarray(), axis=0)) and \
np.allclose(result[2], np.mean(tf_idf[[50,70,80]].toarray(), axis=0)):
print('Pass')
else:
print('Check your code')
###Output
Pass
###Markdown
Assessing convergence How can we tell if the k-means algorithm is converging? We can look at the cluster assignments and see if they stabilize over time. In fact, we'll be running the algorithm until the cluster assignments stop changing at all. To be extra safe, and to assess the clustering performance, we'll be looking at an additional criterion: the sum of all squared distances between data points and centroids. This is defined as $$J(\mathcal{Z},\mu) = \sum_{j=1}^k \sum_{i:z_i = j} \|\mathbf{x}_i - \mu_j\|^2.$$ The smaller the distances, the more homogeneous the clusters are. In other words, we'd like to have "tight" clusters.
###Code
def compute_heterogeneity(data, k, centroids, cluster_assignment):
heterogeneity = 0.0
for i in xrange(k):
# Select all data points that belong to cluster i. Fill in the blank (RHS only)
member_data_points = data[cluster_assignment==i, :]
if member_data_points.shape[0] > 0: # check if i-th cluster is non-empty
# Compute distances from centroid to data points (RHS only)
distances = pairwise_distances(member_data_points, [centroids[i]], metric='euclidean')
squared_distances = distances**2
heterogeneity += np.sum(squared_distances)
return heterogeneity
###Output
_____no_output_____
###Markdown
Let's compute the cluster heterogeneity for the 2-cluster example we've been considering based on our current cluster assignments and centroids.
###Code
compute_heterogeneity(data, 2, centroids, cluster_assignment)
###Output
_____no_output_____
###Markdown
Combining into a single function Once the two k-means steps have been implemented, as well as our heterogeneity metric we wish to monitor, it is only a matter of putting these functions together to write a k-means algorithm that* Repeatedly performs Steps 1 and 2* Tracks convergence metrics* Stops if either no assignment changed or we reach a certain number of iterations.
###Code
# Fill in the blanks
def kmeans(data, k, initial_centroids, maxiter, record_heterogeneity=None, verbose=False):
'''This function runs k-means on given data and initial set of centroids.
maxiter: maximum number of iterations to run.
record_heterogeneity: (optional) a list, to store the history of heterogeneity as function of iterations
if None, do not store the history.
verbose: if True, print how many data points changed their cluster labels in each iteration'''
centroids = initial_centroids[:]
prev_cluster_assignment = None
for itr in xrange(maxiter):
if verbose:
print(itr)
# 1. Make cluster assignments using nearest centroids
# YOUR CODE HERE
cluster_assignment = assign_clusters(data,centroids)
# 2. Compute a new centroid for each of the k clusters, averaging all data points assigned to that cluster.
# YOUR CODE HERE
centroids = revise_centroids(data,k,cluster_assignment)
# Check for convergence: if none of the assignments changed, stop
if prev_cluster_assignment is not None and \
(prev_cluster_assignment==cluster_assignment).all():
break
# Print number of new assignments
if prev_cluster_assignment is not None:
num_changed = np.sum(prev_cluster_assignment!=cluster_assignment)
if verbose:
print(' {0:5d} elements changed their cluster assignment.'.format(num_changed))
# Record heterogeneity convergence metric
if record_heterogeneity is not None:
# YOUR CODE HERE
score = compute_heterogeneity(data,k,centroids,cluster_assignment)
record_heterogeneity.append(score)
prev_cluster_assignment = cluster_assignment[:]
return centroids, cluster_assignment
###Output
_____no_output_____
###Markdown
Plotting convergence metric We can use the above function to plot the convergence metric across iterations.
###Code
def plot_heterogeneity(heterogeneity, k):
plt.figure(figsize=(7,4))
plt.plot(heterogeneity, linewidth=4)
plt.xlabel('# Iterations')
plt.ylabel('Heterogeneity')
plt.title('Heterogeneity of clustering over time, K={0:d}'.format(k))
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Let's consider running k-means with K=3 clusters for a maximum of 400 iterations, recording cluster heterogeneity at every step. Then, let's plot the heterogeneity over iterations using the plotting function above.
###Code
k = 3
heterogeneity = []
initial_centroids = get_initial_centroids(tf_idf, k, seed=0)
centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,
record_heterogeneity=heterogeneity, verbose=True)
plot_heterogeneity(heterogeneity, k)
np.bincount(cluster_assignment)
###Output
_____no_output_____
###Markdown
**Quiz Question**. (True/False) The clustering objective (heterogeneity) is non-increasing for this example. **Quiz Question**. Let's step back from this particular example. If the clustering objective (heterogeneity) would ever increase when running k-means, that would indicate: (choose one) 1. The k-means algorithm got stuck in a bad local minimum 2. There is a bug in the k-means code 3. All data points consist of exact duplicates 4. Nothing is wrong. The objective should generally go down sooner or later. **Quiz Question**. Which of the clusters contains the greatest number of data points in the end? Hint: Use [`np.bincount()`](http://docs.scipy.org/doc/numpy-1.11.0/reference/generated/numpy.bincount.html) to count occurrences of each cluster label. 1. Cluster 0 2. Cluster 1 3. Cluster 2 Beware of local maxima One weakness of k-means is that it tends to get stuck in a local minimum. To see this, let us run k-means multiple times, with different initial centroids created using different random seeds. **Note:** Again, in practice, you should set different seeds for every run. We give you a list of seeds for this assignment so that everyone gets the same answer. This may take several minutes to run.
###Code
k = 10
heterogeneity = {}
import time
start = time.time()
for seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]:
initial_centroids = get_initial_centroids(tf_idf, k, seed)
centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,
record_heterogeneity=None, verbose=False)
# To save time, compute heterogeneity only once in the end
heterogeneity[seed] = compute_heterogeneity(tf_idf, k, centroids, cluster_assignment)
print(np.bincount(cluster_assignment).max())
print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity[seed]))
sys.stdout.flush()
end = time.time()
print(end-start)
###Output
18047
seed=000000, heterogeneity=57457.52442
15779
seed=020000, heterogeneity=57533.20100
18132
seed=040000, heterogeneity=57512.69257
17900
seed=060000, heterogeneity=57466.97925
17582
seed=080000, heterogeneity=57494.92990
16969
seed=100000, heterogeneity=57484.42210
16481
seed=120000, heterogeneity=57554.62410
419.935226917
###Markdown
Notice the variation in heterogeneity for different initializations. This indicates that k-means sometimes gets stuck at a bad local minimum. **Quiz Question**. Another way to capture the effect of changing initialization is to look at the distribution of cluster assignments. Add a line to the code above to compute the size ( of member data points) of clusters for each run of k-means. Look at the size of the largest cluster (most of member data points) across multiple runs, with seeds 0, 20000, ..., 120000. How much does this measure vary across the runs? What is the minimum and maximum values this quantity takes? One effective way to counter this tendency is to use **k-means++** to provide a smart initialization. This method tries to spread out the initial set of centroids so that they are not too close together. It is known to improve the quality of local optima and lower average runtime.
###Code
def smart_initialize(data, k, seed=None):
'''Use k-means++ to initialize a good set of centroids'''
if seed is not None: # useful for obtaining consistent results
np.random.seed(seed)
centroids = np.zeros((k, data.shape[1]))
# Randomly choose the first centroid.
# Since we have no prior knowledge, choose uniformly at random
idx = np.random.randint(data.shape[0])
centroids[0] = data[idx,:].toarray()
# Compute distances from the first centroid chosen to all the other data points
squared_distances = pairwise_distances(data, centroids[0:1], metric='euclidean').flatten()**2
for i in xrange(1, k):
# Choose the next centroid randomly, so that the probability for each data point to be chosen
# is directly proportional to its squared distance from the nearest centroid.
        # Roughly speaking, a new centroid should be as far from other centroids as possible.
idx = np.random.choice(data.shape[0], 1, p=squared_distances/sum(squared_distances))
centroids[i] = data[idx,:].toarray()
# Now compute distances from the centroids to all data points
squared_distances = np.min(pairwise_distances(data, centroids[0:i+1], metric='euclidean')**2,axis=1)
return centroids
###Output
_____no_output_____
###Markdown
Let's now rerun k-means with 10 clusters using the same set of seeds, but always using k-means++ to initialize the algorithm.This may take several minutes to run.
###Code
k = 10
heterogeneity_smart = {}
start = time.time()
for seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]:
initial_centroids = smart_initialize(tf_idf, k, seed)
centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400,
record_heterogeneity=None, verbose=False)
# To save time, compute heterogeneity only once in the end
heterogeneity_smart[seed] = compute_heterogeneity(tf_idf, k, centroids, cluster_assignment)
print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity_smart[seed]))
sys.stdout.flush()
end = time.time()
print(end-start)
###Output
seed=000000, heterogeneity=57468.63808
seed=020000, heterogeneity=57486.94263
seed=040000, heterogeneity=57454.35926
seed=060000, heterogeneity=57530.43659
seed=080000, heterogeneity=57454.51852
seed=100000, heterogeneity=57471.56674
seed=120000, heterogeneity=57523.28839
504.426817894
###Markdown
Let's compare the set of cluster heterogeneities we got from our 7 restarts of k-means using random initialization compared to the 7 restarts of k-means using k-means++ as a smart initialization.The following code produces a [box plot](http://matplotlib.org/api/pyplot_api.html) for each of these methods, indicating the spread of values produced by each method.
###Code
plt.figure(figsize=(8,5))
plt.boxplot([heterogeneity.values(), heterogeneity_smart.values()], vert=False)
plt.yticks([1, 2], ['k-means', 'k-means++'])
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
###Output
_____no_output_____
###Markdown
A few things to notice from the box plot:* On average, k-means++ produces a better clustering than Random initialization.* Variation in clustering quality is smaller for k-means++. **In general, you should run k-means at least a few times with different initializations and then return the run resulting in the lowest heterogeneity.** Let us write a function that runs k-means multiple times and picks the best run that minimizes heterogeneity. The function accepts an optional list of seed values to be used for the multiple runs; if no such list is provided, the current UTC time is used as seed values.
###Code
def kmeans_multiple_runs(data, k, maxiter, num_runs, seed_list=None, verbose=False):
heterogeneity = {}
min_heterogeneity_achieved = float('inf')
best_seed = None
final_centroids = None
final_cluster_assignment = None
for i in xrange(num_runs):
# Use UTC time if no seeds are provided
if seed_list is not None:
seed = seed_list[i]
np.random.seed(seed)
else:
seed = int(time.time())
np.random.seed(seed)
# Use k-means++ initialization
# YOUR CODE HERE
initial_centroids = smart_initialize(data,k,seed)
# Run k-means
# YOUR CODE HERE
centroids, cluster_assignment = kmeans(data, k, initial_centroids, maxiter,
record_heterogeneity=None, verbose=False)
# To save time, compute heterogeneity only once in the end
# YOUR CODE HERE
heterogeneity[seed] = compute_heterogeneity(data, k, centroids, cluster_assignment)
if verbose:
print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity[seed]))
sys.stdout.flush()
# if current measurement of heterogeneity is lower than previously seen,
# update the minimum record of heterogeneity.
if heterogeneity[seed] < min_heterogeneity_achieved:
min_heterogeneity_achieved = heterogeneity[seed]
best_seed = seed
final_centroids = centroids
final_cluster_assignment = cluster_assignment
# Return the centroids and cluster assignments that minimize heterogeneity.
return final_centroids, final_cluster_assignment
###Output
_____no_output_____
###Markdown
How to choose K Since we are measuring the tightness of the clusters, a higher value of K reduces the possible heterogeneity metric by definition. For example, if we have N data points and set K=N clusters, then we could have 0 cluster heterogeneity by setting the N centroids equal to the values of the N data points. (Note: Not all runs for larger K will result in lower heterogeneity than a single run with smaller K due to local optima.) Let's explore this general trend for ourselves by performing the following analysis. Use the `kmeans_multiple_runs` function to run k-means with five different values of K. For each K, use k-means++ and multiple runs to pick the best solution. In what follows, we consider K=2,10,25,50,100 and 7 restarts for each setting.**IMPORTANT: The code block below will take about one hour to finish. We highly suggest that you use the arrays that we have computed for you.**Side note: In practice, a good implementation of k-means would utilize parallelism to run multiple runs of k-means at once. For an example, see [scikit-learn's KMeans](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html).
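As a tiny illustration of the K=N remark above, we can reuse the 3-point toy array `data` and the `compute_heterogeneity` function defined earlier: if every point serves as its own centroid, the heterogeneity is exactly zero.

```python
# Each toy data point acts as its own centroid -> heterogeneity of 0.0
print(compute_heterogeneity(data, 3, data, np.array([0, 1, 2])))
```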
###Code
#def plot_k_vs_heterogeneity(k_values, heterogeneity_values):
# plt.figure(figsize=(7,4))
# plt.plot(k_values, heterogeneity_values, linewidth=4)
# plt.xlabel('K')
# plt.ylabel('Heterogeneity')
# plt.title('K vs. Heterogeneity')
# plt.rcParams.update({'font.size': 16})
# plt.tight_layout()
#start = time.time()
#centroids = {}
#cluster_assignment = {}
#heterogeneity_values = []
#k_list = [2, 10, 25, 50, 100]
#seed_list = [0, 20000, 40000, 60000, 80000, 100000, 120000]
#for k in k_list:
# heterogeneity = []
# centroids[k], cluster_assignment[k] = kmeans_multiple_runs(tf_idf, k, maxiter=400,
# num_runs=len(seed_list),
# seed_list=seed_list,
# verbose=True)
# score = compute_heterogeneity(tf_idf, k, centroids[k], cluster_assignment[k])
# heterogeneity_values.append(score)
#plot_k_vs_heterogeneity(k_list, heterogeneity_values)
#end = time.time()
#print(end-start)
###Output
_____no_output_____
###Markdown
To use the pre-computed NumPy arrays, first download kmeans-arrays.npz as mentioned in the reading for this assignment and load them with the following code. Make sure the downloaded file is in the same directory as this notebook.
###Code
def plot_k_vs_heterogeneity(k_values, heterogeneity_values):
plt.figure(figsize=(7,4))
plt.plot(k_values, heterogeneity_values, linewidth=4)
plt.xlabel('K')
plt.ylabel('Heterogeneity')
plt.title('K vs. Heterogeneity')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
filename = 'kmeans-arrays.npz'
heterogeneity_values = []
k_list = [2, 10, 25, 50, 100]
if os.path.exists(filename):
arrays = np.load(filename)
centroids = {}
cluster_assignment = {}
for k in k_list:
print k
sys.stdout.flush()
'''To save memory space, do not load the arrays from the file right away. We use
a technique known as lazy evaluation, where some expressions are not evaluated
until later. Any expression appearing inside a lambda function doesn't get
evaluated until the function is called.
Lazy evaluation is extremely important in memory-constrained setting, such as
an Amazon EC2 t2.micro instance.'''
centroids[k] = lambda k=k: arrays['centroids_{0:d}'.format(k)]
cluster_assignment[k] = lambda k=k: arrays['cluster_assignment_{0:d}'.format(k)]
score = compute_heterogeneity(tf_idf, k, centroids[k](), cluster_assignment[k]())
heterogeneity_values.append(score)
plot_k_vs_heterogeneity(k_list, heterogeneity_values)
else:
print('File not found. Skipping.')
###Output
2
10
25
50
100
###Markdown
In the above plot we show that heterogeneity goes down as we increase the number of clusters. Does this mean we should always favor a higher K? **Not at all!** As we will see in the following section, setting K too high may end up separating data points that are actually pretty alike. At the extreme, we can set individual data points to be their own clusters (K=N) and achieve zero heterogeneity, but separating each data point into its own cluster is hardly a desirable outcome. In the following section, we will learn how to detect a K set "too large". Visualize clusters of documents Let's start visualizing some clustering results to see if we think the clustering makes sense. We can use such visualizations to help us assess whether we have set K too large or too small for a given application. Following the theme of this course, we will judge whether the clustering makes sense in the context of document analysis.What are we looking for in a good clustering of documents?* Documents in the same cluster should be similar.* Documents from different clusters should be less similar.So a bad clustering exhibits either of two symptoms:* Documents in a cluster have mixed content.* Documents with similar content are divided up and put into different clusters.To help visualize the clustering, we do the following:* Fetch nearest neighbors of each centroid from the set of documents assigned to that cluster. We will consider these documents as being representative of the cluster.* Print titles and first sentences of those nearest neighbors.* Print top 5 words that have highest tf-idf weights in each centroid.
###Code
def visualize_document_clusters(wiki, tf_idf, centroids, cluster_assignment, k, map_index_to_word, display_content=True):
'''wiki: original dataframe
tf_idf: data matrix, sparse matrix format
    map_index_to_word: SFrame specifying the mapping between words and column indices
display_content: if True, display 8 nearest neighbors of each centroid'''
print('==========================================================')
# Visualize each cluster c
for c in xrange(k):
# Cluster heading
print('Cluster {0:d} '.format(c)),
# Print top 5 words with largest TF-IDF weights in the cluster
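        # argsort()[::-1] orders word indices by descending TF-IDF weight within this centroid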
idx = centroids[c].argsort()[::-1]
for i in xrange(5): # Print each word along with the TF-IDF weight
print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroids[c,idx[i]])),
print('')
if display_content:
# Compute distances from the centroid to all data points in the cluster,
# and compute nearest neighbors of the centroids within the cluster.
distances = pairwise_distances(tf_idf, centroids[c].reshape(1, -1), metric='euclidean').flatten()
distances[cluster_assignment!=c] = float('inf') # remove non-members from consideration
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in xrange(8):
text = ' '.join(wiki[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('\n* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('==========================================================')
###Output
_____no_output_____
###Markdown
Let us first look at the 2 cluster case (K=2).
###Code
'''Notice the extra pairs of parentheses for centroids and cluster_assignment.
The centroid and cluster_assignment are still inside the npz file,
and we need to explicitly indicate when to load them into memory.'''
visualize_document_clusters(wiki, tf_idf, centroids[2](), cluster_assignment[2](), 2, map_index_to_word)
###Output
==========================================================
Cluster 0 she:0.025 her:0.017 music:0.012 he:0.011 university:0.011
* Anita Kunz 0.97401
anita e kunz oc born 1956 is a canadianborn artist and illustratorkunz has lived in london
new york and toronto contributing to magazines and working
* Janet Jackson 0.97472
janet damita jo jackson born may 16 1966 is an american singer songwriter and actress know
n for a series of sonically innovative socially conscious and
* Madonna (entertainer) 0.97475
madonna louise ciccone tkoni born august 16 1958 is an american singer songwriter actress
and businesswoman she achieved popularity by pushing the boundaries of lyrical
* %C3%81ine Hyland 0.97536
ine hyland ne donlon is emeritus professor of education and former vicepresident of univer
sity college cork ireland she was born in 1942 in athboy co
* Jane Fonda 0.97621
jane fonda born lady jayne seymour fonda december 21 1937 is an american actress writer po
litical activist former fashion model and fitness guru she is
* Christine Robertson 0.97643
christine mary robertson born 5 october 1948 is an australian politician and former austra
lian labor party member of the new south wales legislative council serving
* Pat Studdy-Clift 0.97643
pat studdyclift is an australian author specialising in historical fiction and nonfictionb
orn in 1925 she lived in gunnedah until she was sent to a boarding
* Alexandra Potter 0.97646
alexandra potter born 1970 is a british author of romantic comediesborn in bradford yorksh
ire england and educated at liverpool university gaining an honors degree in
==========================================================
Cluster 1 league:0.040 season:0.036 team:0.029 football:0.029 played:0.028
* Todd Williams 0.95468
todd michael williams born february 13 1971 in syracuse new york is a former major league
baseball relief pitcher he attended east syracuseminoa high school
* Gord Sherven 0.95622
gordon r sherven born august 21 1963 in gravelbourg saskatchewan and raised in mankota sas
katchewan is a retired canadian professional ice hockey forward who played
* Justin Knoedler 0.95639
justin joseph knoedler born july 17 1980 in springfield illinois is a former major league
baseball catcherknoedler was originally drafted by the st louis cardinals
* Chris Day 0.95648
christopher nicholas chris day born 28 july 1975 is an english professional footballer who
plays as a goalkeeper for stevenageday started his career at tottenham
* Tony Smith (footballer, born 1957) 0.95653
anthony tony smith born 20 february 1957 is a former footballer who played as a central de
fender in the football league in the 1970s and
* Ashley Prescott 0.95761
ashley prescott born 11 september 1972 is a former australian rules footballer he played w
ith the richmond and fremantle football clubs in the afl between
* Leslie Lea 0.95802
leslie lea born 5 october 1942 in manchester is an english former professional footballer
he played as a midfielderlea began his professional career with blackpool
* Tommy Anderson (footballer) 0.95818
thomas cowan tommy anderson born 24 september 1934 in haddington is a scottish former prof
essional footballer he played as a forward and was noted for
==========================================================
###Markdown
Both clusters have mixed content, although cluster 1 is much purer than cluster 0:
* Cluster 0: artists, songwriters, professors, politicians, writers, etc.
* Cluster 1: baseball players, hockey players, soccer (association football) players, etc.
Top words of cluster 1 are all related to sports, whereas top words of cluster 0 show no clear pattern. Roughly speaking, the entire dataset was divided into athletes and non-athletes. It would be better if we sub-divided non-athletes into more categories, so let us use more clusters. How about `K=10`?
###Code
k = 10
visualize_document_clusters(wiki, tf_idf, centroids[k](), cluster_assignment[k](), k, map_index_to_word)
###Output
==========================================================
Cluster 0 film:0.020 art:0.014 he:0.011 book:0.010 television:0.010
* Wilson McLean 0.97479
wilson mclean born 1937 is a scottish illustrator and artist he has illustrated primarily
in the field of advertising but has also provided cover art
* Anton Hecht 0.97748
anton hecht is an english artist born in london in 2007 he asked musicians from around the
durham area to contribute to a soundtrack for
* David Salle 0.97800
david salle born 1952 is an american painter printmaker and stage designer who helped defi
ne postmodern sensibility salle was born in norman oklahoma he earned
* Vipin Sharma 0.97805
vipin sharma is an indian actor born in new delhi he is a graduate of national school of d
rama new delhi india and the canadian
* Paul Swadel 0.97823
paul swadel is a new zealand film director and producerhe has directed and produced many s
uccessful short films which have screened in competition at cannes
* Allan Stratton 0.97834
allan stratton born 1951 is a canadian playwright and novelistborn in stratford ontario st
ratton began his professional arts career while he was still in high
* Bill Bennett (director) 0.97848
bill bennett born 1953 is an australian film director producer and screenwriterhe dropped
out of medicine at queensland university in 1972 and joined the australian
* Rafal Zielinski 0.97850
rafal zielinski born 1957 montreal is an independent filmmaker he is best known for direct
ing films such as fun sundance film festival special jury award
==========================================================
Cluster 1 league:0.052 rugby:0.044 club:0.042 cup:0.042 season:0.041
* Chris Day 0.93220
christopher nicholas chris day born 28 july 1975 is an english professional footballer who
plays as a goalkeeper for stevenageday started his career at tottenham
* Gary Hooper 0.93481
gary hooper born 26 january 1988 is an english professional footballer who plays as a forw
ard for norwich cityhooper started his career at nonleague grays
* Tony Smith (footballer, born 1957) 0.93504
anthony tony smith born 20 february 1957 is a former footballer who played as a central de
fender in the football league in the 1970s and
* Jason Roberts (footballer) 0.93527
jason andre davis roberts mbe born 25 january 1978 is a former professional footballer and
now a football punditborn in park royal london roberts was
* Paul Robinson (footballer, born 1979) 0.93587
paul william robinson born 15 october 1979 is an english professional footballer who plays
for blackburn rovers as a goalkeeper he is a former england
* Alex Lawless 0.93732
alexander graham alex lawless born 26 march 1985 is a welsh professional footballer who pl
ays for luton town as a midfielderlawless began his career with
* Neil Grayson 0.93748
neil grayson born 1 november 1964 in york is an english footballer who last played as a st
riker for sutton towngraysons first club was local
* Sol Campbell 0.93759
sulzeer jeremiah sol campbell born 18 september 1974 is a former england international foo
tballer a central defender he had a 19year career playing in the
==========================================================
Cluster 2 championships:0.040 tour:0.037 championship:0.032 world:0.029 won:0.029
* Alessandra Aguilar 0.94505
alessandra aguilar born 1 july 1978 in lugo is a spanish longdistance runner who specialis
es in marathon running she represented her country in the event
* Heather Samuel 0.94529
heather barbara samuel born 6 july 1970 is a retired sprinter from antigua and barbuda who
specialized in the 100 and 200 metres in 1990
* Viola Kibiwot 0.94617
viola jelagat kibiwot born december 22 1983 in keiyo district is a runner from kenya who s
pecialises in the 1500 metres kibiwot won her first
* Ayelech Worku 0.94636
ayelech worku born june 12 1979 is an ethiopian longdistance runner most known for winning
two world championships bronze medals on the 5000 metres she
* Morhad Amdouni 0.94763
morhad amdouni born 21 january 1988 in portovecchio is a french middle and longdistance ru
nner he was european junior champion in track and cross country
* Krisztina Papp 0.94776
krisztina papp born 17 december 1982 in eger is a hungarian long distance runner she is th
e national indoor record holder over 5000 mpapp began
* Petra Lammert 0.94869
petra lammert born 3 march 1984 in freudenstadt badenwrttemberg is a former german shot pu
tter and current bobsledder she was the 2009 european indoor champion
* Hasan Mahboob 0.94880
hasan mahboob ali born silas kirui on 31 december 1981 in kapsabet is a bahraini longdista
nce runner he became naturalized in bahrain and switched from
==========================================================
Cluster 3 baseball:0.110 league:0.103 major:0.052 games:0.047 season:0.045
* Steve Springer 0.89300
steven michael springer born february 11 1961 is an american former professional baseball
player who appeared in major league baseball as a third baseman and
* Dave Ford 0.89547
david alan ford born december 29 1956 is a former major league baseball pitcher for the ba
ltimore orioles born in cleveland ohio ford attended lincolnwest
* Todd Williams 0.89820
todd michael williams born february 13 1971 in syracuse new york is a former major league
baseball relief pitcher he attended east syracuseminoa high school
* Justin Knoedler 0.90035
justin joseph knoedler born july 17 1980 in springfield illinois is a former major league
baseball catcherknoedler was originally drafted by the st louis cardinals
* Kevin Nicholson (baseball) 0.90643
kevin ronald nicholson born march 29 1976 is a canadian baseball shortstop he played part
of the 2000 season for the san diego padres of
* James Baldwin (baseball) 0.90648
james j baldwin jr born july 15 1971 is a former major league baseball pitcher he batted a
nd threw righthanded in his 11season career he
* Joe Strong 0.90655
joseph benjamin strong born september 9 1962 in fairfield california is a former major lea
gue baseball pitcher who played for the florida marlins from 2000
* Javier L%C3%B3pez (baseball) 0.90691
javier alfonso lpez born july 11 1977 is a puerto rican professional baseball pitcher for
the san francisco giants of major league baseball he is
==========================================================
Cluster 4 research:0.038 university:0.035 professor:0.032 science:0.023 institute:0.019
* Lawrence W. Green 0.95957
lawrence w green is best known by health education researchers as the originator of the pr
ecede model and codeveloper of the precedeproceed model which has
* Timothy Luke 0.96057
timothy w luke is university distinguished professor of political science in the college o
f liberal arts and human sciences as well as program chair of
* Ren%C3%A9e Fox 0.96100
rene c fox a summa cum laude graduate of smith college in 1949 earned her phd in sociology
in 1954 from radcliffe college harvard university
* Francis Gavin 0.96323
francis j gavin is first frank stanton chair in nuclear security policy studies and profes
sor of political science at mit before joining mit he was
* Catherine Hakim 0.96374
catherine hakim born 30 may 1948 is a british sociologist who specialises in womens employ
ment and womens issues she is currently a professorial research fellow
* Stephen Park Turner 0.96405
stephen turner is a researcher in social practice social and political theory and the phil
osophy of the social sciences he is graduate research professor in
* Robert Bates (political scientist) 0.96489
robert hinrichs bates born 1942 is an american political scientist he is eaton professor o
f the science of government in the departments of government and
* Georg von Krogh 0.96505
georg von krogh was born in oslo norway he is a professor at eth zurich and holds the chai
r of strategic management and innovation he
==========================================================
Cluster 5 football:0.076 coach:0.060 basketball:0.056 season:0.044 played:0.037
* Todd Curley 0.92731
todd curley born 14 january 1973 is a former australian rules footballer who played for co
llingwood and the western bulldogs in the australian football league
* Ashley Prescott 0.92992
ashley prescott born 11 september 1972 is a former australian rules footballer he played w
ith the richmond and fremantle football clubs in the afl between
* Pete Richardson 0.93204
pete richardson born october 17 1946 in youngstown ohio is a former american football defe
nsive back in the national football league and former college head
* Nathan Brown (Australian footballer born 1976) 0.93561
nathan daniel brown born 14 august 1976 is an australian rules footballer who played for t
he melbourne demons in the australian football leaguehe was drafted
* Earl Spalding 0.93654
earl spalding born 11 march 1965 in south perth is a former australian rules footballer wh
o played for melbourne and carlton in the victorian football
* Bud Grant 0.93766
harry peter bud grant jr born may 20 1927 is a former american football and canadian footb
all head coach grant served as the head coach
* Tyrone Wheatley 0.93885
tyrone anthony wheatley born january 19 1972 is the running backs coach of michigan and a
former professional american football player who played 10 seasons
* Nick Salter 0.93916
nick salter born 30 july 1987 is an australian rules footballer who played for port adelai
de football club in the australian football league aflhe was
==========================================================
Cluster 6 she:0.138 her:0.089 actress:0.014 film:0.013 miss:0.012
* Lauren Royal 0.93445
lauren royal born march 3 circa 1965 is a book writer from california royal has written bo
th historic and novelistic booksa selfproclaimed angels baseball fan
* Barbara Hershey 0.93496
barbara hershey born barbara lynn herzstein february 5 1948 once known as barbara seagull
is an american actress in a career spanning nearly 50 years
* Janet Jackson 0.93559
janet damita jo jackson born may 16 1966 is an american singer songwriter and actress know
n for a series of sonically innovative socially conscious and
* Jane Fonda 0.93759
jane fonda born lady jayne seymour fonda december 21 1937 is an american actress writer po
litical activist former fashion model and fitness guru she is
* Janine Shepherd 0.93833
janine lee shepherd am born 1962 is an australian pilot and former crosscountry skier shep
herds career as an athlete ended when she suffered major injuries
* Ellina Graypel 0.93847
ellina graypel born july 19 1972 is an awardwinning russian singersongwriter she was born
near the volga river in the heart of russia she spent
* Alexandra Potter 0.93858
alexandra potter born 1970 is a british author of romantic comediesborn in bradford yorksh
ire england and educated at liverpool university gaining an honors degree in
* Melissa Hart (actress) 0.93913
melissa hart is an american actress singer and teacher she made her broadway debut in 1966
as an ensemble member in jerry bocks the apple
==========================================================
Cluster 7 music:0.057 album:0.040 band:0.035 orchestra:0.023 released:0.022
* Brenton Broadstock 0.95722
brenton broadstock ao born 1952 is an australian composerbroadstock was born in melbourne
he studied history politics and music at monash university and later composition
* Prince (musician) 0.96057
prince rogers nelson born june 7 1958 known by his mononym prince is an american singerson
gwriter multiinstrumentalist and actor he has produced ten platinum albums
* Will.i.am 0.96066
william adams born march 15 1975 known by his stage name william pronounced will i am is a
n american rapper songwriter entrepreneur actor dj record
* Tom Bancroft 0.96117
tom bancroft born 1967 london is a british jazz drummer and composer he began drumming age
d seven and started off playing jazz with his father
* Julian Knowles 0.96152
julian knowles is an australian composer and performer specialising in new and emerging te
chnologies his creative work spans the fields of composition for theatre dance
* Dan Siegel (musician) 0.96223
dan siegel born in seattle washington is a pianist composer and record producer his earlie
r music has been described as new age while his more
* Tony Mills (musician) 0.96238
tony mills born 7 july 1962 in solihull england is an english rock singer best known for h
is work with shy and tnthailing from birmingham
* Don Robertson (composer) 0.96249
don robertson born 1942 is an american composerdon robertson was born in 1942 in denver co
lorado and began studying music with conductor and pianist antonia
==========================================================
Cluster 8 hockey:0.216 nhl:0.134 ice:0.065 season:0.053 league:0.047
* Gord Sherven 0.83598
gordon r sherven born august 21 1963 in gravelbourg saskatchewan and raised in mankota sas
katchewan is a retired canadian professional ice hockey forward who played
* Eric Brewer 0.83765
eric peter brewer born april 17 1979 is a canadian professional ice hockey defenceman for
the anaheim ducks of the national hockey league nhl he
* Stephen Johns (ice hockey) 0.84580
stephen johns born april 18 1992 is an american professional ice hockey defenceman he is c
urrently playing with the rockford icehogs of the american hockey
* Mike Stevens (ice hockey, born 1965) 0.85320
mike stevens born december 30 1965 in kitchener ontario is a retired professional ice hock
ey player who played 23 games in the national hockey league
* Tanner Glass 0.85484
tanner glass born november 29 1983 is a canadian professional ice hockey winger who plays
for the new york rangers of the national hockey league
* Todd Strueby 0.86053
todd kenneth strueby born june 15 1963 in lanigan saskatchewan and raised in humboldt sask
atchewan is a retired canadian professional ice hockey centre who played
* Steven King (ice hockey) 0.86129
steven andrew king born july 22 1969 in east greenwich rhode island is a former ice hockey
forward who played professionally from 1991 to 2000
* Don Jackson (ice hockey) 0.86661
donald clinton jackson born september 2 1956 in minneapolis minnesota and bloomington minn
esota is an ice hockey coach and a retired professional ice hockey player
==========================================================
Cluster 9 party:0.028 election:0.025 minister:0.025 served:0.021 law:0.019
* Doug Lewis 0.96516
douglas grinslade doug lewis pc qc born april 17 1938 is a former canadian politician a ch
artered accountant and lawyer by training lewis entered the
* David Anderson (British Columbia politician) 0.96530
david a anderson pc oc born august 16 1937 in victoria british columbia is a former canadi
an cabinet minister educated at victoria college in victoria
* Lucienne Robillard 0.96679
lucienne robillard pc born june 16 1945 is a canadian politician and a member of the liber
al party of canada she sat in the house
* Bob Menendez 0.96686
robert bob menendez born january 1 1954 is the senior united states senator from new jerse
y he is a member of the democratic party first
* Mal Sandon 0.96706
malcolm john mal sandon born 16 september 1945 is an australian politician he was an austr
alian labor party member of the victorian legislative council from
* Roger Price (Australian politician) 0.96717
leo roger spurway price born 26 november 1945 is a former australian politician he was ele
cted as a member of the australian house of representatives
* Maureen Lyster 0.96734
maureen anne lyster born 10 september 1943 is an australian politician she was an australi
an labor party member of the victorian legislative assembly from 1985
* Don Bell 0.96739
donald h bell born march 10 1942 in new westminster british columbia is a canadian politic
ian he is currently serving as a councillor for the
==========================================================
###Markdown
Clusters 0, 1, and 5 still appear to be mixed, but the others are quite consistent in content.
* Cluster 0: artists, actors, film directors, playwrights
* Cluster 1: soccer (association football) players, rugby players
* Cluster 2: track and field athletes
* Cluster 3: baseball players
* Cluster 4: professors, researchers, scholars
* Cluster 5: Australian rules football players, American football players
* Cluster 6: female figures from various fields
* Cluster 7: composers, songwriters, singers, music producers
* Cluster 8: ice hockey players
* Cluster 9: politicians
Clusters are now purer, but some are qualitatively "bigger" than others. For instance, the category of scholars is more general than the category of baseball players. Increasing the number of clusters may split larger clusters. Another way to look at the size of the clusters is to count the number of articles in each cluster.
###Code
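# np.bincount counts how many documents were assigned to each of the 10 cluster ids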
np.bincount(cluster_assignment[10]())
###Output
_____no_output_____
###Markdown
**Quiz Question**. Which of the 10 clusters above contains the greatest number of articles?
1. Cluster 0: artists, actors, film directors, playwrights
2. Cluster 4: professors, researchers, scholars
3. Cluster 5: Australian rules football players, American football players
4. Cluster 7: composers, songwriters, singers, music producers
5. Cluster 9: politicians
**Quiz Question**. Which of the 10 clusters contains the smallest number of articles?
1. Cluster 1: soccer (association football) players, rugby players
2. Cluster 3: baseball players
3. Cluster 6: female figures from various fields
4. Cluster 7: composers, songwriters, singers, music producers
5. Cluster 8: ice hockey players
There appears to be at least some connection between the topical consistency of a cluster and the number of its member data points. Let us visualize the case for K=25. For the sake of brevity, we do not print the content of the documents. It turns out that the top words with the highest TF-IDF weights in each cluster are representative of the cluster.
###Code
visualize_document_clusters(wiki, tf_idf, centroids[25](), cluster_assignment[25](), 25,
map_index_to_word, display_content=False) # turn off text for brevity
###Output
==========================================================
Cluster 0 law:0.077 district:0.048 court:0.046 republican:0.038 senate:0.038
==========================================================
Cluster 1 research:0.054 professor:0.033 science:0.032 university:0.031 physics:0.029
==========================================================
Cluster 2 hockey:0.216 nhl:0.134 ice:0.065 season:0.052 league:0.047
==========================================================
Cluster 3 party:0.065 election:0.042 elected:0.031 parliament:0.027 member:0.023
==========================================================
Cluster 4 board:0.025 president:0.023 chairman:0.022 business:0.022 executive:0.020
==========================================================
Cluster 5 minister:0.160 prime:0.056 cabinet:0.044 party:0.043 election:0.042
==========================================================
Cluster 6 university:0.044 professor:0.037 studies:0.035 history:0.034 philosophy:0.031
==========================================================
Cluster 7 election:0.066 manitoba:0.058 liberal:0.051 party:0.045 riding:0.043
==========================================================
Cluster 8 racing:0.095 formula:0.056 championship:0.054 race:0.052 poker:0.051
==========================================================
Cluster 9 economics:0.146 economic:0.096 economist:0.053 policy:0.048 research:0.043
==========================================================
Cluster 10 championships:0.075 olympics:0.050 marathon:0.048 metres:0.048 she:0.048
==========================================================
Cluster 11 she:0.144 her:0.092 miss:0.016 actress:0.015 television:0.012
==========================================================
Cluster 12 he:0.011 radio:0.009 show:0.009 that:0.009 his:0.009
==========================================================
Cluster 13 baseball:0.109 league:0.104 major:0.052 games:0.047 season:0.045
==========================================================
Cluster 14 art:0.144 museum:0.076 gallery:0.056 artist:0.033 arts:0.031
==========================================================
Cluster 15 football:0.125 afl:0.060 nfl:0.051 season:0.049 played:0.045
==========================================================
Cluster 16 music:0.097 jazz:0.061 piano:0.033 composer:0.029 orchestra:0.028
==========================================================
Cluster 17 league:0.052 rugby:0.044 club:0.043 cup:0.042 season:0.042
==========================================================
Cluster 18 poetry:0.055 novel:0.045 book:0.042 published:0.039 fiction:0.035
==========================================================
Cluster 19 film:0.095 theatre:0.038 films:0.035 directed:0.029 television:0.028
==========================================================
Cluster 20 album:0.064 band:0.049 music:0.037 released:0.033 song:0.025
==========================================================
Cluster 21 bishop:0.075 air:0.066 force:0.048 church:0.047 command:0.045
==========================================================
Cluster 22 orchestra:0.146 opera:0.116 symphony:0.106 conductor:0.077 music:0.064
==========================================================
Cluster 23 basketball:0.120 coach:0.105 nba:0.065 head:0.042 season:0.040
==========================================================
Cluster 24 tour:0.256 pga:0.213 golf:0.142 open:0.073 golfer:0.062
==========================================================
###Markdown
Looking at the representative examples and top words, we classify each cluster as follows. Notice the bolded items, which indicate the appearance of a new theme.
* Cluster 0: **lawyers, judges, legal scholars**
* Cluster 1: **professors, researchers, scholars (natural and health sciences)**
* Cluster 2: ice hockey players
* Cluster 3: politicians
* Cluster 4: **government officials**
* Cluster 5: politicians
* Cluster 6: **professors, researchers, scholars (social sciences and humanities)**
* Cluster 7: Canadian politicians
* Cluster 8: **car racers**
* Cluster 9: **economists**
* Cluster 10: track and field athletes
* Cluster 11: females from various fields
* Cluster 12: (mixed; no clear theme)
* Cluster 13: baseball players
* Cluster 14: **painters, sculptors, artists**
* Cluster 15: Australian rules football players, American football players
* Cluster 16: **musicians, composers**
* Cluster 17: soccer (association football) players, rugby players
* Cluster 18: **poets**
* Cluster 19: **film directors, playwrights**
* Cluster 20: **songwriters, singers, music producers**
* Cluster 21: **generals of U.S. Air Force**
* Cluster 22: **music directors, conductors**
* Cluster 23: **basketball players**
* Cluster 24: **golf players**
Indeed, increasing K achieved the desired effect of breaking up large clusters. Depending on the application, this may or may not be preferable to the K=10 analysis. Let's take it to the extreme and set K=100. We suspect that this value is too large. Let us look at the top words from each cluster:
###Code
k=100
visualize_document_clusters(wiki, tf_idf, centroids[k](), cluster_assignment[k](), k,
map_index_to_word, display_content=False)
# turn off text for brevity -- turn it on if you are curious ;)
np.bincount(cluster_assignment[100]())
###Output
_____no_output_____ |
Telecom_Data.ipynb | ###Markdown
Business Problem
A telecommunication company is facing a dip in revenue due to customer attrition and is looking for ways to tackle the issue.
- Analysis of the current churn data, looking for patterns
- A possible churn prediction system
Why solve this with data science?
Not all problems need to be solved using ML/DL techniques! Justification in this use case:
- Customer churn does not happen because of one specific set of factors. Factors may overlap, or there may be too many reasons for the churn.
- Scalability: as the organization gains more customers, having ML solutions to handle them will be a lot better than doing manual analysis.
With these justifications, let's get our customers' data and understand it.
Advantages of having a churn prediction system
The client can react in time and retain customers by making a special offer according to their preferences.
Dataset Received
Telecom users dataset: https://www.kaggle.com/radmirzosimov/telecom-users-dataset
###Code
!pip install plotly==4.14.3
# Import Libraries
import pandas as pd
import numpy as np
import plotly.express as px
import seaborn as sns
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn import metrics
path_to_file = 'https://raw.githubusercontent.com/SSaishruthi/Useful_links/master/telecom_users.csv'
# Read the input data
df_data = pd.read_csv(path_to_file)
# Take first few rows
df_data.head()
###Output
_____no_output_____
###Markdown
Data Glossary
- customerID - customer id
- gender - client gender (male / female)
- SeniorCitizen - is the client retired (1, 0)
- Partner - is the client married (Yes, No)
- tenure - how many months a person has been a client of the company
- PhoneService - is the telephone service connected (Yes, No)
- MultipleLines - are multiple phone lines connected (Yes, No, No phone service)
- InternetService - client's Internet service provider (DSL, Fiber optic, No)
- OnlineSecurity - is the online security service connected (Yes, No, No internet service)
- OnlineBackup - is the online backup service activated (Yes, No, No internet service)
- DeviceProtection - does the client have equipment insurance (Yes, No, No internet service)
- TechSupport - is the technical support service connected (Yes, No, No internet service)
- StreamingTV - is the streaming TV service connected (Yes, No, No internet service)
- StreamingMovies - is the streaming cinema service activated (Yes, No, No internet service)
- Contract - type of customer contract (Month-to-month, One year, Two year)
- PaperlessBilling - whether the client uses paperless billing (Yes, No)
- PaymentMethod - payment method (Electronic check, Mailed check, Bank transfer (automatic), Credit card (automatic))
- MonthlyCharges - current monthly payment
- TotalCharges - the total amount that the client paid for the services for the entire time
- Churn - whether there was a churn (Yes or No)
Exploratory Data Analysis
###Code
# Remove ids from the analysis
df_data = df_data.drop(columns=['Unnamed: 0', 'customerID'])
df_data.head()
# Let's see of we have any missing values
df_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5986 entries, 0 to 5985
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gender 5986 non-null object
1 SeniorCitizen 5986 non-null int64
2 Partner 5986 non-null object
3 Dependents 5986 non-null object
4 tenure 5986 non-null int64
5 PhoneService 5986 non-null object
6 MultipleLines 5986 non-null object
7 InternetService 5986 non-null object
8 OnlineSecurity 5986 non-null object
9 OnlineBackup 5986 non-null object
10 DeviceProtection 5986 non-null object
11 TechSupport 5986 non-null object
12 StreamingTV 5986 non-null object
13 StreamingMovies 5986 non-null object
14 Contract 5986 non-null object
15 PaperlessBilling 5986 non-null object
16 PaymentMethod 5986 non-null object
17 MonthlyCharges 5986 non-null float64
18 TotalCharges 5986 non-null object
19 Churn 5986 non-null object
dtypes: float64(1), int64(2), object(17)
memory usage: 935.4+ KB
###Markdown
 From the data gloassary, we can observe that the `TotalCharges` is a number but it is in `object` type. Let's analyze that.
###Code
df_data['TotalCharges'].value_counts()
###Output
_____no_output_____
###Markdown
Looks like there are about 10 blank values in the `TotalCharges` field. Let's update the values.
###Code
# Observe that TotalCharges have blank values
print('Before removing blank values')
print(df_data[df_data['TotalCharges'] == ' '].index)
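# r'^\s*$' matches strings that are empty or whitespace-only; those entries are
# replaced with 0 before the column is cast to float below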
df_data['TotalCharges'] = df_data['TotalCharges'].replace(r'^\s*$', 0, regex=True)
print('After removing blank values')
print(df_data[df_data['TotalCharges'] == ' '].index)
df_data['TotalCharges'] = df_data['TotalCharges'].astype(float)
# Let's review the data information
df_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5986 entries, 0 to 5985
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gender 5986 non-null object
1 SeniorCitizen 5986 non-null int64
2 Partner 5986 non-null object
3 Dependents 5986 non-null object
4 tenure 5986 non-null int64
5 PhoneService 5986 non-null object
6 MultipleLines 5986 non-null object
7 InternetService 5986 non-null object
8 OnlineSecurity 5986 non-null object
9 OnlineBackup 5986 non-null object
10 DeviceProtection 5986 non-null object
11 TechSupport 5986 non-null object
12 StreamingTV 5986 non-null object
13 StreamingMovies 5986 non-null object
14 Contract 5986 non-null object
15 PaperlessBilling 5986 non-null object
16 PaymentMethod 5986 non-null object
17 MonthlyCharges 5986 non-null float64
18 TotalCharges 5986 non-null float64
19 Churn 5986 non-null object
dtypes: float64(2), int64(2), object(16)
memory usage: 935.4+ KB
###Markdown
Business Problem to Data Science Questions
Dolores, the client, was expecting to analyze the current data to understand what went wrong and correct it. Here are the questions the team came up with. _NOTE_: As we present to the stakeholders, the questions will get updated.
- How are gender, partner, and dependents related to churn?
- Are we facing churn with customers with longer tenure?
- Are we having issues with phone and internet services?
- Did customers who opted for tech support stay for a longer tenure with less churn?
- Do customers' monthly and total charges relate to churn?
- Do customers who opted for streaming face issues with the service?
- Which contract do customers prefer in order to stay with the business?
Let's visualize
We will be using `Plotly` and `Seaborn` for visualization, and `Pandas` for analysis.
How are gender, partner, and dependents related to churn?
###Code
fig = px.treemap(df_data.groupby(['gender', 'Partner', 'Dependents','Churn']).size().reset_index(name='count'),
path=['gender', 'Partner', 'Dependents','Churn'], values='count',
                 color='Churn', title='How are gender, partner, and dependents related to churn?')
fig.show()
###Output
_____no_output_____
###Markdown
Are we facing churn with customers with longer tenure?
###Code
fig = px.histogram(df_data.groupby(['tenure', 'Churn']).size().reset_index(name='count'),
x="tenure", y='count', color="Churn", marginal="rug", color_discrete_map={"Yes": "#E45756", "No": "#1CBE4F"},
title='Are we facing churn with customers with longer tenure?')
fig.show()
###Output
_____no_output_____
###Markdown
Are we having issues with phone and internet services?
###Code
fig = px.sunburst(df_data.groupby(['Churn', 'PhoneService', 'InternetService']).size().reset_index(name='count'),
path=['Churn', 'PhoneService', 'InternetService'], values='count', title='Are we having issues with phone and internet services?')
fig.show()
###Output
_____no_output_____
###Markdown
Did customers who opted for tech support stay for a longer tenure with less churn?
###Code
df_tech_yes = df_data[df_data['TechSupport'] == 'Yes']
df_tech_no = df_data[df_data['TechSupport'] == 'No']
###Output
_____no_output_____
###Markdown
Customers getting tech support
###Code
fig = px.histogram(df_tech_yes.groupby(['tenure', 'Churn']).size().reset_index(name='count'),
x="tenure", y='count', color="Churn", marginal="rug", color_discrete_map={"Yes": "#E45756", "No": "#1CBE4F"},
title='Statistics of customers opted for tech support')
fig.show()
###Output
_____no_output_____
###Markdown
Customers not getting tech support
###Code
fig = px.histogram(df_tech_no.groupby(['tenure', 'Churn']).size().reset_index(name='count'),
x="tenure", y='count', color="Churn", marginal="rug", color_discrete_map={"Yes": "#E45756", "No": "#1CBE4F"},
title='Statistics of customers opted out of the tech support')
fig.show()
###Output
_____no_output_____
###Markdown
Do customers' monthly and total charges relate to churn?
###Code
sns.set(rc={'figure.figsize':(26,8.27)})
sns.kdeplot(data=df_data, x="MonthlyCharges", hue="Churn", multiple="stack").set(title='Did customers monthly charge and total charge relate with churn?')
sns.set(rc={'figure.figsize':(26,8.27)})
sns.kdeplot(data=df_data, x="TotalCharges", hue="Churn", multiple="stack").set(title='Did customers total charge and total charge relate with churn?')
###Output
_____no_output_____
###Markdown
Do customers who opted for streaming face issues with the service?
###Code
ax = sns.barplot(x="StreamingTV", y="count", hue='Churn',
data=df_data.groupby(['Churn', 'StreamingTV']).size().reset_index(name='count'), palette="Set2").set(title='Streaming TV vs Churn')
ax = sns.barplot(x="StreamingMovies", y="count", hue='Churn',
data=df_data.groupby(['Churn', 'StreamingMovies']).size().reset_index(name='count'),
palette="Set2").set(title='Streaming Movies vs Churn')
###Output
_____no_output_____
###Markdown
Which contract do customers prefer in order to stay with the business?
###Code
fig = px.sunburst(df_data.groupby(['Contract', 'Churn']).size().reset_index(name='count'),
path=['Contract', 'Churn'], values='count', title='Which contract do customers prefer in order to stay with the business?')
fig.show()
###Output
_____no_output_____
###Markdown
Data Pre-processing
###Code
# List of categorical columns
cat_columns = ['gender', 'SeniorCitizen', 'Partner', 'PhoneService',
'MultipleLines', 'InternetService', 'OnlineSecurity',
'OnlineBackup', 'DeviceProtection', 'TechSupport',
'StreamingTV', 'StreamingMovies', 'Contract',
'PaperlessBilling', 'PaymentMethod', 'Dependents']
###Output
_____no_output_____
###Code
# We can really quickly build dummy features with pandas by calling the get_dummies function.
df_processed = pd.get_dummies(df_data, prefix_sep="__",
columns=cat_columns)
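# prefix_sep="__" lets us recover the original column name later via col.split("__")[0],
# which is exactly what the cat_dummies bookkeeping below relies on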
df_processed.head()
###Output
_____no_output_____
###Markdown
Now we have the data with one-hot encoded features.
###Code
# Encode target column
# First let's see unique values in the target column
print('Before encoding:', df_processed['Churn'].unique())
# Encode target columns: Assign `Yes` to 1 and `No` to 0
df_processed["Churn"] = np.where(df_processed["Churn"].str.contains("Yes"), 1, 0)
print('After encoding:', df_processed['Churn'].unique())
###Output
Before encoding: ['No' 'Yes']
After encoding: [0 1]
###Markdown
Let's save the data transformations we did above so that we can perform the same operations on the test dataset. If there is any drift in the data, we might have to re-train the model.
###Code
cat_dummies = [col for col in df_processed
if "__" in col
and col.split("__")[0] in cat_columns]
with open('cat_dummies.txt', 'w') as filehandle:
for listitem in cat_dummies:
filehandle.write('%s\n' % listitem)
processed_columns = list(df_processed.columns[:])
with open('processed_columns.txt', 'w') as filehandle:
for listitem in processed_columns:
filehandle.write('%s\n' % listitem)
# Looks like the dataset is imbalanced
df_processed['Churn'].value_counts()
###Output
_____no_output_____
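###Markdown
As a hedged sketch (not run against real unseen data here), this is how the saved `cat_dummies` and `processed_columns` lists could be used to make new data match the training-time encoding; the `df_new` sample below is only a stand-in for future test data.
###Code
# Illustrative sketch: align a new dataframe with the columns seen at training time.
df_new = df_data.sample(5, random_state=0)  # stand-in for unseen data
df_new_processed = pd.get_dummies(df_new, prefix_sep="__", columns=cat_columns)
# drop dummy columns that never appeared during training
for col in df_new_processed.columns:
    if "__" in col and col.split("__")[0] in cat_columns and col not in cat_dummies:
        df_new_processed = df_new_processed.drop(col, axis=1)
# add training-time dummy columns that are missing in the new data
for col in cat_dummies:
    if col not in df_new_processed.columns:
        df_new_processed[col] = 0
# enforce the same column order as the processed training data
df_new_processed = df_new_processed[processed_columns]
df_new_processed.head()
###Output
_____no_output_____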
###Markdown
Choosing algorithms: some tips!
- Explainability
- Memory: can you load your data fully? Do you need incremental learning algorithms?
- Number of features
- Nonlinearity of the data
- Training speed
- Prediction speed
How to deal with data imbalance?
There are many ways to handle data imbalance.
- Choose a learning algorithm that provides weights for every class.
- Data-level approaches: under-sampling, over-sampling, cluster-based over-sampling, Synthetic minority over-sampling technique (SMOTE)
- Algorithmic ensemble techniques
- Bagging techniques
- Boosting: AdaBoost, Gradient Tree Boosting, XGBoost
- https://www.analyticsvidhya.com/blog/2017/03/imbalanced-data-classification/
- https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/
Here, we are using the adaptive boosting technique to deal with the data imbalance (a small sketch of the SMOTE alternative appears at the end of this section).
An AdaBoost classifier
AdaBoost is the original boosting technique: it builds a highly accurate prediction rule by combining many weak and inaccurate rules. Each classifier is trained sequentially, with the goal of correctly classifying, in every round, the examples that were incorrectly classified in the previous round.
###Code
# Get only features
feature_df = df_processed.drop(['Churn'], axis=1)
# Extract target column
target_df = df_processed[['Churn']]
# Split dataset into train and test (best practice is to split into train, validation, and test)
x_train,x_test,y_train,y_test = train_test_split(feature_df, target_df, test_size=0.2, random_state = 0)
# Initialize adaboost classifier
cls = AdaBoostClassifier(n_estimators=100)
# Fit the model
cls.fit(x_train, y_train)
# Predict and calculate metrics
print("Accuracy:", metrics.accuracy_score(y_test, cls.predict(x_test)))
print('Recall Score:', metrics.recall_score(y_test, cls.predict(x_test), average='weighted'))
print('Precision Score:', metrics.precision_score(y_test, cls.predict(x_test), average='weighted'))
print('F1 Score:', metrics.f1_score(y_test, cls.predict(x_test), average='weighted'))
print('Confusion matrix:', metrics.confusion_matrix(y_test, cls.predict(x_test)))
import pickle
# save the classifier
with open('classifier.pkl', 'wb') as fid:
pickle.dump(cls, fid)
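# the saved model can later be restored with:
# with open('classifier.pkl', 'rb') as fid:
#     cls_loaded = pickle.load(fid)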
###Output
_____no_output_____
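###Markdown
The list above also mentions data-level remedies for class imbalance. For comparison only (this is not part of the original pipeline), here is a minimal sketch of the SMOTE approach using the `imbalanced-learn` package, assuming it is installed, applied to the same training split and evaluated with the same weighted F1 score.
###Code
# Illustrative sketch only: oversample the minority class with SMOTE, then retrain.
# Requires: pip install imbalanced-learn
from imblearn.over_sampling import SMOTE
smote = SMOTE(random_state=0)
x_train_res, y_train_res = smote.fit_resample(x_train, y_train['Churn'])
print('Class counts after SMOTE:', np.bincount(y_train_res))
cls_smote = AdaBoostClassifier(n_estimators=100)
cls_smote.fit(x_train_res, y_train_res)
print('F1 Score (SMOTE + AdaBoost):',
      metrics.f1_score(y_test, cls_smote.predict(x_test), average='weighted'))
###Output
_____no_output_____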
###Markdown
Business ProblemA telecommunication company is facing dip in revenue due to customer attrition and was looking for ways to tackle the issue. - Analysis of the current churn data and look for patterns- Possible churn prediction system Why solve using data science?Not all problems needs to be solved using ML/DL techniques!Justification in this use case:- Customer churn does not happen with specific set of factors. Factors may overlap or there many too many resons for the churn.- Scalability: As the organization gets more customers having ML solutions to handle them will be lot better than doing manual analysis.With these justifications, lets get our customers data and understand. Advantages of having a churn prediction systemClient can react in time and retain the customers by making a special offer according to the preference Dataset RecievedTelecom users datasethttps://www.kaggle.com/radmirzosimov/telecom-users-dataset
###Code
!pip install plotly==4.14.3
# Import Libraries
import pandas as pd
import numpy as np
import plotly.express as px
import seaborn as sns
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn import metrics
path_to_file = '/content/drive/MyDrive/AIE/telecom_users.csv'
# Read the input data
df_data = pd.read_csv(path_to_file)
# Take first few rows
df_data.head()
###Output
_____no_output_____
###Markdown
Data Glossary- customerID - customer id- gender - client gender (male / female)- SeniorCitizen - is the client retired (1, 0)- Partner - is the client married (Yes, No)- tenure - how many months a person has been a client of the company- PhoneService - is the telephone service connected (Yes, No)- MultipleLines - are multiple phone lines connected (Yes, No, No phone service)- InternetService - client's Internet service provider (DSL, Fiber optic, No)- OnlineSecurity - is the online security service connected (Yes, No, No internet service)- OnlineBackup - is the online backup service activated (Yes, No, No internet service)- DeviceProtection - does the client have equipment insurance (Yes, No, No internet service)- TechSupport - is the technical support service connected (Yes, No, No internet service)- StreamingTV - is the streaming TV service connected (Yes, No, No internet service)- StreamingMovies - is the streaming cinema service activated (Yes, No, No internet service)- Contract - type of customer contract (Month-to-month, One year, Two year)- PaperlessBilling - whether the client uses paperless billing (Yes, No)- PaymentMethod - payment method (Electronic check, Mailed check, Bank transfer (automatic), Credit card (automatic))- MonthlyCharges - current monthly payment- TotalCharges - the total amount that the client paid for the services for the entire time- Churn - whether there was a churn (Yes or No) Exploratory Data Analysis
###Code
# Remove ids from the analysis
df_data = df_data.drop(columns=['Unnamed: 0', 'customerID'])
df_data.head()
# Let's see of we have any missing values
df_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5986 entries, 0 to 5985
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gender 5986 non-null object
1 SeniorCitizen 5986 non-null int64
2 Partner 5986 non-null object
3 Dependents 5986 non-null object
4 tenure 5986 non-null int64
5 PhoneService 5986 non-null object
6 MultipleLines 5986 non-null object
7 InternetService 5986 non-null object
8 OnlineSecurity 5986 non-null object
9 OnlineBackup 5986 non-null object
10 DeviceProtection 5986 non-null object
11 TechSupport 5986 non-null object
12 StreamingTV 5986 non-null object
13 StreamingMovies 5986 non-null object
14 Contract 5986 non-null object
15 PaperlessBilling 5986 non-null object
16 PaymentMethod 5986 non-null object
17 MonthlyCharges 5986 non-null float64
18 TotalCharges 5986 non-null object
19 Churn 5986 non-null object
dtypes: float64(1), int64(2), object(17)
memory usage: 935.4+ KB
###Markdown
 From the data gloassary, we can observe that the `TotalCharges` is a number but it is in `object` type. Let's analyze that.
###Code
df_data['TotalCharges'].value_counts()
###Output
_____no_output_____
###Markdown
Looks like there are about 10 blank values in the `TotalCharges` field. Let's update the values.
###Code
# Observe that TotalCharges have blank values
print('Before removing blank values')
print(df_data[df_data['TotalCharges'] == ' '].index)
df_data['TotalCharges'] = df_data['TotalCharges'].replace(r'^\s*$', 0, regex=True)
print('After removing blank values')
print(df_data[df_data['TotalCharges'] == ' '].index)
df_data['TotalCharges'] = df_data['TotalCharges'].astype(float)
# Let's review the data information
df_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5986 entries, 0 to 5985
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gender 5986 non-null object
1 SeniorCitizen 5986 non-null int64
2 Partner 5986 non-null object
3 Dependents 5986 non-null object
4 tenure 5986 non-null int64
5 PhoneService 5986 non-null object
6 MultipleLines 5986 non-null object
7 InternetService 5986 non-null object
8 OnlineSecurity 5986 non-null object
9 OnlineBackup 5986 non-null object
10 DeviceProtection 5986 non-null object
11 TechSupport 5986 non-null object
12 StreamingTV 5986 non-null object
13 StreamingMovies 5986 non-null object
14 Contract 5986 non-null object
15 PaperlessBilling 5986 non-null object
16 PaymentMethod 5986 non-null object
17 MonthlyCharges 5986 non-null float64
18 TotalCharges 5986 non-null float64
19 Churn 5986 non-null object
dtypes: float64(2), int64(2), object(16)
memory usage: 935.4+ KB
###Markdown
Business Problem to Data Science QuestionsDolores, the client, was expecting to analysze the current data to understand what went wrong and correct. Here are the questions the team came up with. _NOTE_: As we present to the stakeholders, questions willget updated.- How gender, partner, and dependents are related to chrun?- Are we facing churn with customers with longer tenure?- Are we having issues with phone and internet services?- Does customers opted for tech support stayed for longer tenure with less churn?- Did customers monthly charge and total charge relate with churn?- Do customers opted for streaming face issue with the service?- Which contract do customers prefer in order to stay with the business? Let's visualizeWe will be using `Plotly` and `Seaborn` for the visualization pupose. `Pandas` used for analysis. How gender, partner, and dependents are related to chrun?
###Code
fig = px.treemap(df_data.groupby(['gender', 'Partner', 'Dependents','Churn']).size().reset_index(name='count'),
path=['gender', 'Partner', 'Dependents','Churn'], values='count',
color='Churn', title='How gender, partner, and dependents are related to chrun?')
fig.show()
###Output
_____no_output_____
###Markdown
Are we facing churn with customers with longer tenure?
###Code
fig = px.histogram(df_data.groupby(['tenure', 'Churn']).size().reset_index(name='count'),
x="tenure", y='count', color="Churn", marginal="rug", color_discrete_map={"Yes": "#E45756", "No": "#1CBE4F"},
title='Are we facing churn with customers with longer tenure?')
fig.show()
###Output
_____no_output_____
###Markdown
Are we having issues with phone and internet services?
###Code
fig = px.sunburst(df_data.groupby(['Churn', 'PhoneService', 'InternetService']).size().reset_index(name='count'),
path=['Churn', 'PhoneService', 'InternetService'], values='count', title='Are we having issues with phone and internet services?')
fig.show()
###Output
_____no_output_____
###Markdown
Does customers opted for tech support stayed for longer tenure with less churn?
###Code
df_tech_yes = df_data[df_data['TechSupport'] == 'Yes']
df_tech_no = df_data[df_data['TechSupport'] == 'No']
###Output
_____no_output_____
###Markdown
Customers getting tech support
###Code
fig = px.histogram(df_tech_yes.groupby(['tenure', 'Churn']).size().reset_index(name='count'),
x="tenure", y='count', color="Churn", marginal="rug", color_discrete_map={"Yes": "#E45756", "No": "#1CBE4F"},
title='Statistics of customers opted for tech support')
fig.show()
###Output
_____no_output_____
###Markdown
Customers not getting tech support
###Code
fig = px.histogram(df_tech_no.groupby(['tenure', 'Churn']).size().reset_index(name='count'),
x="tenure", y='count', color="Churn", marginal="rug", color_discrete_map={"Yes": "#E45756", "No": "#1CBE4F"},
title='Statistics of customers opted out of the tech support')
fig.show()
###Output
_____no_output_____
###Markdown
Did customers monthly charge and total charge relate with churn?
###Code
sns.set(rc={'figure.figsize':(26,8.27)})
sns.kdeplot(data=df_data, x="MonthlyCharges", hue="Churn", multiple="stack").set(title='Did customers monthly charge and total charge relate with churn?')
sns.set(rc={'figure.figsize':(26,8.27)})
sns.kdeplot(data=df_data, x="TotalCharges", hue="Churn", multiple="stack").set(title='Did customers total charge and total charge relate with churn?')
###Output
_____no_output_____
###Markdown
Do customers opted for streaming, face issue with the service?
###Code
ax = sns.barplot(x="StreamingTV", y="count", hue='Churn',
data=df_data.groupby(['Churn', 'StreamingTV']).size().reset_index(name='count'), palette="Set2").set(title='Streaming TV vs Churn')
ax = sns.barplot(x="StreamingMovies", y="count", hue='Churn',
data=df_data.groupby(['Churn', 'StreamingMovies']).size().reset_index(name='count'),
palette="Set2").set(title='Streaming Movies vs Churn')
###Output
_____no_output_____
###Markdown
Which contract do customers prefer in order to stay with the business?
###Code
fig = px.sunburst(df_data.groupby(['Contract', 'Churn']).size().reset_index(name='count'),
path=['Contract', 'Churn'], values='count', title='Which contract do customers prefer in order to stay with the business?')
fig.show()
###Output
_____no_output_____
###Markdown
Data Pre-processing
###Code
# List of categorical columns
cat_columns = ['gender', 'SeniorCitizen', 'Partner', 'PhoneService',
'MultipleLines', 'InternetService', 'OnlineSecurity',
'OnlineBackup', 'DeviceProtection', 'TechSupport',
'StreamingTV', 'StreamingMovies', 'Contract',
'PaperlessBilling', 'PaymentMethod', 'Dependents']
###Output
_____no_output_____
###Markdown

###Code
# We can really quickly build dummy features with pandas by calling the get_dummies function.
df_processed = pd.get_dummies(df_data, prefix_sep="__",
columns=cat_columns)
df_processed.head()
###Output
_____no_output_____
###Markdown
Now we got the data with one hot encoded feature.
###Code
# Encode target column
# First let's see unique values in the target column
print('Before encoding:', df_processed['Churn'].unique())
# Encode target columns: Assign `Yes` to 1 and `No` to 0
df_processed["Churn"] = np.where(df_processed["Churn"].str.contains("Yes"), 1, 0)
print('After encoding:', df_processed['Churn'].unique())
###Output
Before encoding: ['No' 'Yes']
After encoding: [0 1]
###Markdown
Let's save the data transformation we did before so that we perform the same operation in the test dataset. If there is any drift in the data, we might have to re-train the model.
###Code
cat_dummies = [col for col in df_processed
if "__" in col
and col.split("__")[0] in cat_columns]
with open('cat_dummies.txt', 'w') as filehandle:
for listitem in cat_dummies:
filehandle.write('%s\n' % listitem)
processed_columns = list(df_processed.columns[:])
with open('processed_columns.txt', 'w') as filehandle:
for listitem in processed_columns:
filehandle.write('%s\n' % listitem)
# Looks like the dataset is imbalanced
df_processed['Churn'].value_counts()
###Output
_____no_output_____
###Markdown
Choosing algorithms some tips!- Explainability- Memory: can you load your data fully? need incremental learning algorithms?- Number of features- Nonlinearity of the data- Training speed- Prediction speed How to deal with data imbalance?There are many ways to handle the dta imbalance.- Choose a learning algorithm that provide weights for every class.- Data-level approach: Under-sampling, Over-sampling, Cluster-based over sampling, Synthetic minority over-sampling technique (SMOTE)- Algorithmic ensemble techniques- Bagging techniques- Boosting: Ada boost, Gradient Tree boosting, XG Boost/- https://www.analyticsvidhya.com/blog/2017/03/imbalanced-data-classification/- https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/Here, we are using adaptive boosting technique in this example to deal with data imbalance. An AdaBoost classifier.Ada Boost is the first original boosting technique which creates a highly accurate prediction rule by combining many weak and inaccurate rules. Each classifier is serially trained with the goal of correctly classifying examples in every round that were incorrectly classified in the previous round.
###Code
# Get only features
feature_df = df_processed.drop(['Churn'], axis=1)
# Extract target column
target_df = df_processed[['Churn']]
# Split dataset into train and test (Best Practise is to split into train, validation, and test)
x_train,x_test,y_train,y_test = train_test_split(feature_df, target_df, test_size=0.2, random_state = 0)
# Initialize adaboost classifier
cls = AdaBoostClassifier(n_estimators=100)
# Fit the model
cls.fit(x_train, y_train)
# Predict and calculate metrics
print("Accuracy:", metrics.accuracy_score(y_test, cls.predict(x_test)))
print('Recall Score:', metrics.recall_score(y_test, cls.predict(x_test), average='weighted'))
print('Precision Score:', metrics.precision_score(y_test, cls.predict(x_test), average='weighted'))
print('F1 Score:', metrics.f1_score(y_test, cls.predict(x_test), average='weighted'))
print('Confusion matrix:', metrics.confusion_matrix(y_test, cls.predict(x_test)))
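# Hedged sketch (assumption, not part of the original notebook): the list above mentions class
# weighting as one way to handle imbalance. AdaBoost's fit() accepts per-sample weights, which
# can be derived from the class frequencies:
from sklearn.utils.class_weight import compute_sample_weight
sample_weights = compute_sample_weight(class_weight='balanced', y=y_train.values.ravel())
cls_weighted = AdaBoostClassifier(n_estimators=100)
cls_weighted.fit(x_train, y_train.values.ravel(), sample_weight=sample_weights)
print('Weighted F1 Score:', metrics.f1_score(y_test, cls_weighted.predict(x_test), average='weighted'))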
import pickle
# save the classifier
with open('classifier.pkl', 'wb') as fid:
pickle.dump(cls, fid)
###Output
_____no_output_____ |
week03_lm/homework_tf.ipynb | ###Markdown
Homework: going neural (6 pts)We've checked out statistical approaches to language models in the last notebook. Now let's go find out what deep learning has to offer.We're gonna use the same dataset as before, except this time we build a language model that's character-level, not word level. Before you go:* If you haven't done seminar already, use `seminar.ipynb` to download the data.* This homework uses TensorFlow v2.0: this is [how you install it](https://www.tensorflow.org/beta); and that's [how you use it](https://colab.research.google.com/drive/1YtfbZGgzKr7fpBTqkdEQtu4vUALoTv8A).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Working on character level means that we don't need to deal with large vocabulary or missing words. Heck, we can even keep uppercase words in text! The downside, however, is that all our sequences just got a lot longer.However, we still need special tokens:* Begin Of Sequence (__BOS__) - this token is at the start of each sequence. We use it so that we always have non-empty input to our neural network. $P(x_t) = P(x_1 | BOS)$* End Of Sequence (__EOS__) - you guess it... this token is at the end of each sequence. The catch is that it should __not__ occur anywhere else except at the very end. If our model produces this token, the sequence is over.
###Code
BOS, EOS = ' ', '\n'
data = pd.read_json("./arxivData.json")
lines = data.apply(lambda row: (row['title'] + ' ; ' + row['summary'])[:512], axis=1) \
.apply(lambda line: BOS + line.replace(EOS, ' ') + EOS) \
.tolist()
# if you missed the seminar, download data here - https://yadi.sk/d/_nGyU2IajjR9-w
###Output
_____no_output_____
###Markdown
Our next step is __building char-level vocabulary__. Put simply, you need to assemble a list of all unique tokens in the dataset.
###Code
# get all unique characters from lines (including capital letters and symbols)
tokens = <YOUR CODE>
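# (hedged hint, assumption: e.g. tokens = set(''.join(lines)) collects every distinct character)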
tokens = sorted(tokens)
n_tokens = len(tokens)
print ('n_tokens = ',n_tokens)
assert 100 < n_tokens < 150
assert BOS in tokens and EOS in tokens, "BOS and EOS must be in the vocabulary"
###Output
_____no_output_____
###Markdown
We can now assign each character its index in the tokens list. This way we can encode a string into a TF-friendly integer vector.
###Code
# dictionary of character -> its identifier (index in tokens list)
token_to_id = <YOUR CODE>
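# (hedged hint, assumption: e.g. token_to_id = {t: i for i, t in enumerate(tokens)})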
assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(n_tokens):
assert token_to_id[tokens[i]] == i, "token identifier must be it's position in tokens list"
print("Seems alright!")
###Output
_____no_output_____
###Markdown
Our final step is to assemble several strings into an integer matrix `[batch_size, text_length]`. The only problem is that each sequence has a different length. We can work around that by padding short sequences with extra _EOS_ tokens or cropping long sequences. Here's how it works:
###Code
def to_matrix(lines, max_len=None, pad=token_to_id[EOS], dtype='int32'):
"""Casts a list of lines into tf-digestable matrix"""
max_len = max_len or max(map(len, lines))
lines_ix = np.full([len(lines), max_len], pad, dtype=dtype)
for i in range(len(lines)):
line_ix = list(map(token_to_id.get, lines[i][:max_len]))
lines_ix[i, :len(line_ix)] = line_ix
return lines_ix
# Example: cast 3 dummy lines to a matrix, padded with EOS
dummy_lines = [
' abc\n',
' abacaba\n',
' abc1234567890\n',
]
print(to_matrix(dummy_lines))
###Output
_____no_output_____
###Markdown
Neural Language Model (2 points including training)Just like for N-gram LMs, we want to estimate probability of text as a joint probability of tokens (symbols this time).$$P(X) = \prod_t P(x_t \mid x_0, \dots, x_{t-1}).$$ Instead of counting all possible statistics, we want to train a neural network with parameters $\theta$ that estimates the conditional probabilities:$$ P(x_t \mid x_0, \dots, x_{t-1}) \approx p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$But before we optimize, we need to define our neural network. Let's start with a fixed-window (aka convolutional) architecture:
###Code
import tensorflow as tf
keras, L = tf.keras, tf.keras.layers
assert tf.__version__.startswith('2'), "Current tf version: {}; required: 2.0.*".format(tf.__version__)
class FixedWindowLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64):
"""
        A fixed window model that looks at (at least) the 5 previous symbols.
        Note: a fixed window LM is effectively performing a convolution over a sequence of words.
        This convolution only looks at the current and previous tokens.
Such convolution can be represented as a sequence of 2 operations:
- pad input vectors by {strides * (filter_size - 1)} zero vectors on the "left", do not pad right
- perform regular convolution with {filter_size} and {strides}
- If you're absolutely lost, here's a hint: use ZeroPadding1D and Conv1D from keras.layers
You can stack several convolutions at once
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
#YOUR CODE - create layers/variables and any metadata you want, e.g. self.emb = L.Embedding(...)
<...>
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
# YOUR CODE - apply layers, see docstring above
return <...>
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
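# Hedged reference sketch (assumption, one possible solution rather than the official one):
# a causal fixed-window model built from the ZeroPadding1D + Conv1D hint in the docstring above.
class FixedWindowLMExample(L.Layer):
    def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64, window=5):
        super().__init__()
        self.emb = L.Embedding(n_tokens, emb_size)
        self.pad = L.ZeroPadding1D(padding=(window - 1, 0))  # pad only on the left => causal
        self.conv = L.Conv1D(hid_size, kernel_size=window, activation='relu')
        self.out = L.Dense(n_tokens)                         # linear outputs, no softmax
    def __call__(self, input_ix):
        return self.out(self.conv(self.pad(self.emb(input_ix))))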
model = FixedWindowLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
print('Weights:', tuple(w.name for w in model.trainable_variables))
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
We can now tune our network's parameters to minimize categorical crossentropy over the training dataset $D$:$$ L = {\frac1{|D|}} \sum_{X \in D} \sum_{x_t \in X} - \log p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$As usual with neural nets, this optimization is performed via stochastic gradient descent with backprop. One can also note that minimizing crossentropy is equivalent to minimizing model __perplexity__, KL-divergence or maximizing log-likelihood.
###Code
def compute_lengths(input_ix, eos_ix=token_to_id[EOS]):
""" compute length of each line in input ix (incl. first EOS), int32 vector of shape [batch_size] """
count_eos = tf.cumsum(tf.cast(tf.equal(input_ix, eos_ix), tf.int32), axis=1, exclusive=True)
lengths = tf.reduce_sum(tf.cast(tf.equal(count_eos, 0), tf.int32), axis=1)
return lengths
print('matrix:\n', dummy_input_ix.numpy())
print('lengths:', compute_lengths(dummy_input_ix).numpy())
def compute_loss(model, input_ix):
"""
:param model: language model that can compute next token logits given token indices
:param input ix: int32 matrix of tokens, shape: [batch_size, length]; padded with eos_ix
"""
input_ix = tf.convert_to_tensor(input_ix, dtype=tf.int32)
logits = model(input_ix[:, :-1])
reference_answers = input_ix[:, 1:]
# Your task: implement loss function as per formula above
# your loss should only be computed on actual tokens, excluding padding
# predicting actual tokens and first EOS do count. Subsequent EOS-es don't
# you will likely need to use compute_lengths and/or tf.sequence_mask to get it right.
<YOUR CODE>
return <YOUR CODE: return scalar loss>
loss_1 = compute_loss(model, to_matrix(dummy_lines, max_len=50))
loss_2 = compute_loss(model, to_matrix(dummy_lines, max_len=100))
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. '\
'Hint: use tf.sequence_mask. Beware +/-1 errors. And be careful when averaging!'
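# Hedged sketch (assumption, one possible masked-loss implementation for reference):
def compute_loss_example(model, input_ix):
    input_ix = tf.convert_to_tensor(input_ix, dtype=tf.int32)
    logits = model(input_ix[:, :-1])
    targets = input_ix[:, 1:]
    lengths = compute_lengths(input_ix)                 # tokens up to and including the first EOS
    # count predictions of x_1 ... x_{len-1} (real tokens + first EOS), i.e. lengths - 1 steps
    mask = tf.sequence_mask(lengths - 1, maxlen=tf.shape(targets)[1], dtype=tf.float32)
    xent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targets, logits=logits)
    return tf.reduce_sum(xent * mask) / tf.reduce_sum(mask)   # average over counted positions only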
###Output
_____no_output_____
###Markdown
EvaluationYou will need two functions: one to compute test loss and another to generate samples. For your convenience, we implemented them both in your stead.
###Code
def score_lines(model, dev_lines, batch_size):
""" computes average loss over the entire dataset """
dev_loss_num, dev_loss_len = 0., 0.
for i in range(0, len(dev_lines), batch_size):
batch_ix = to_matrix(dev_lines[i: i + batch_size])
dev_loss_num += compute_loss(model, batch_ix) * len(batch_ix)
dev_loss_len += len(batch_ix)
return dev_loss_num / dev_loss_len
def generate(model, prefix=BOS, temperature=1.0, max_len=100):
"""
Samples output sequence from probability distribution obtained by model
:param temperature: samples proportionally to model probabilities ^ temperature
if temperature == 0, always takes most likely token. Break ties arbitrarily.
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
if temperature == 0:
next_token = tokens[np.argmax(probs)]
else:
probs = np.array([p ** (1. / temperature) for p in probs])
probs /= sum(probs)
next_token = np.random.choice(tokens, p=probs)
prefix += next_token
if next_token == EOS or len(prefix) > max_len: break
return prefix
###Output
_____no_output_____
###Markdown
Training loopFinally, let's train our model on minibatches of data
###Code
from sklearn.model_selection import train_test_split
train_lines, dev_lines = train_test_split(lines, test_size=0.25, random_state=42)
batch_size = 256
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
from IPython.display import clear_output
from random import sample
from tqdm import trange
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
RNN Language Models (3 points including training)Fixed-size architectures are reasonably good at capturing short-term dependencies, but their design prevents them from capturing any signal outside their window. We can mitigate this problem by using a __recurrent neural network__:$$ h_0 = \vec 0 ; \quad h_{t+1} = RNN(x_t, h_t) $$$$ p(x_t \mid x_0, \dots, x_{t-1}, \theta) = dense_{softmax}(h_t) $$Such a model processes one token at a time, left to right, and maintains a hidden state vector between steps. Theoretically, it can learn arbitrarily long temporal dependencies given a large enough hidden size.
###Code
class RNNLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256):
"""
Build a recurrent language model.
You are free to choose anything you want, but the recommended architecture is
- token embeddings
- one or more LSTM/GRU layers with hid size
- linear layer to predict logits
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
# YOUR CODE - create layers/variables/etc
<...>
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
#YOUR CODE
return <...>
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
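# Hedged reference sketch (assumption, one possible solution): embeddings -> GRU -> dense logits,
# following the recommended architecture in the docstring above.
class RNNLMExample(L.Layer):
    def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256):
        super().__init__()
        self.emb = L.Embedding(n_tokens, emb_size)
        self.rnn = L.GRU(hid_size, return_sequences=True)  # one hidden state per time step
        self.out = L.Dense(n_tokens)                       # linear outputs; softmax is applied later
    def __call__(self, input_ix):
        return self.out(self.rnn(self.emb(input_ix)))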
model = RNNLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
print('Weights:', tuple(w.name for w in model.trainable_variables))
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
RNN trainingOur RNN language model should optimize the same loss function as the fixed-window model. But there's a catch. Since an RNN recurrently multiplies gradients through many time-steps, gradient values may explode, [ruining](https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/nan.jpg) your model.The common solution to that problem is to clip gradients either [individually](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_value) or [globally](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_global_norm).Your task here is to set up a training step that minimizes the same loss function. If you encounter large loss fluctuations during training, please add gradient clipping using the links above._Note: gradient clipping is not exclusive to RNNs. Convolutional networks with enough depth often suffer from the same issue._
###Code
batch_size = 64 # <-- please tune batch size to fit your CPU/GPU configuration
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
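    # Hedged option (assumption): if the loss fluctuates wildly, clip gradients globally here, e.g.
    # grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)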
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
Alternative sampling strategies (1 point)So far we've sampled tokens from the model in proportion to their probability.However, this approach can sometimes generate nonsense words due to the fact that softmax probabilities of these words are never exactly zero. This issue can be somewhat mitigated with sampling temperature, but low temperature harms sampling diversity. Can we remove the nonsense words without sacrificing diversity? __Yes, we can!__ But it takes a different sampling strategy.__Top-k sampling:__ on each step, sample the next token from the __k most likely__ candidates from the language model.Suppose $k=3$ and the token probabilities are $p=[0.1, 0.35, 0.05, 0.2, 0.3]$. You first need to select the $k$ most likely words and set the probability of the rest to zero: $\hat p=[0.0, 0.35, 0.0, 0.2, 0.3]$ and re-normalize: $p^*\approx[0.0, 0.412, 0.0, 0.235, 0.353]$.__Nucleus sampling:__ similar to top-k sampling, but this time we select $k$ dynamically. In nucleus sampling, we sample from the top-__N%__ fraction of the probability mass.Using the same $p=[0.1, 0.35, 0.05, 0.2, 0.3]$ and nucleus N=0.9, the nucleus words consist of:1. the most likely token $w_2$, because $p(w_2) < N$2. the second most likely token $w_5$, because $p(w_2) + p(w_5) = 0.65 < N$3. the third most likely token $w_4$, because $p(w_2) + p(w_5) + p(w_4) = 0.85 < N$And that's it, because the next most likely word would overflow: $p(w_2) + p(w_5) + p(w_4) + p(w_1) = 0.95 > N$.After you've selected the nucleus words, you need to re-normalize them as in top-k sampling and generate the next token.__Your task__ is to implement the nucleus sampling variant and see if it's any good.
###Code
def generate_nucleus(model, prefix=BOS, nucleus=0.9, max_len=100):
"""
    Generate a sequence with nucleus sampling
:param prefix: a string containing space-separated previous tokens
:param nucleus: N from the formulae above, N \in [0, 1]
:param max_len: generate sequences with at most this many tokens, including prefix
    :note: make sure that the nucleus always contains at least one word, even if p(w*) > nucleus
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
<YOUR CODE HERE>
prefix += <YOUR CODE>
if next_token == EOS or len(prefix) > max_len: break
return prefix
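# Hedged helper sketch (assumption): one way to pick the next token inside the loop above,
# given `tokens` and `probs` as returned by get_possible_next_tokens.
def sample_from_nucleus(tokens, probs, nucleus=0.9):
    probs = np.asarray(probs, dtype=np.float64)
    order = np.argsort(-probs)                        # token indices, most likely first
    cumulative = np.cumsum(probs[order])
    k = int(np.searchsorted(cumulative, nucleus))     # how many tokens keep the cumulative mass < nucleus
    kept = order[:max(k, 1)]                          # always keep at least the most likely token
    renormed = probs[kept] / probs[kept].sum()
    return tokens[np.random.choice(kept, p=renormed)]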
for i in range(10):
    print(generate_nucleus(model, nucleus=PLAY_WITH_ME_SENPAI))
###Output
_____no_output_____
###Markdown
Bonus quest I: Beam Search (2 pts incl. samples)At times, you don't really want the model to generate diverse outputs as much as you want a __single most likely hypothesis:__ a single best translation, the most likely continuation of a search query given its prefix, etc. Except, you can't get it exactly. In order to find the exact most likely sequence containing 10 tokens, you would need to enumerate all $|V|^{10}$ possible hypotheses. In practice, 9 times out of 10 you will instead find an approximate most likely output using __beam search__.Here's how it works:0. Initial `beam` = [prefix], max beam_size = k1. for T steps:2. ` ... ` generate all possible next tokens for all hypotheses in the beam, forming `len(beam) * len(vocab)` candidates3. ` ... ` select the beam_size best among all candidates as the new `beam`4. Select the best hypothesis (or hypotheses) from the beam
###Code
from IPython.display import HTML
# Here's what it looks like:
!wget -q https://raw.githubusercontent.com/yandexdataschool/nlp_course/2020/resources/beam_search.html
HTML("beam_search.html")
def generate_beamsearch(model, prefix=BOS, beam_size=4, length=5):
"""
    Generate the most likely sequence with beam search
    :param prefix: a string containing space-separated previous tokens
    :param beam_size: the number of hypotheses kept after every step
    :param length: generate sequences with at most this many tokens, NOT INCLUDING PREFIX
    :returns: the most likely sequence found by beam search
"""
<YOUR CODE HERE>
return <most likely sequence>
generate_beamsearch(model, prefix=' deep ', beam_size=4)
# check it out: which beam size works best?
# find at least 5 prefixes where beam_size=1 and 8 generates different sequences
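# Hedged sketch (assumption): a simple log-probability beam search over get_possible_next_tokens.
def beam_search_example(model, prefix=BOS, beam_size=4, length=5):
    beam = [(prefix, 0.0)]                                  # (hypothesis, log-probability)
    for _ in range(length):
        candidates = []
        for hyp, score in beam:
            if hyp.endswith(EOS):
                candidates.append((hyp, score))             # keep finished hypotheses as-is
                continue
            for token, p in model.get_possible_next_tokens(hyp).items():
                candidates.append((hyp + token, score + np.log(p + 1e-9)))
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beam[0][0]                                       # single most likely hypothesis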
###Output
_____no_output_____
###Markdown
Homework: going neural (6 pts)We've checked out statistical approaches to language models in the last notebook. Now let's go find out what deep learning has to offer.We're gonna use the same dataset as before, except this time we build a language model that's character-level, not word level. Before you go:* If you haven't done seminar already, use `seminar.ipynb` to download the data.* This homework uses TensorFlow v2.0: this is [how you install it](https://www.tensorflow.org/beta); and that's [how you use it](https://colab.research.google.com/drive/1YtfbZGgzKr7fpBTqkdEQtu4vUALoTv8A).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Working on character level means that we don't need to deal with large vocabulary or missing words. Heck, we can even keep uppercase words in text! The downside, however, is that all our sequences just got a lot longer.However, we still need special tokens:* Begin Of Sequence (__BOS__) - this token is at the start of each sequence. We use it so that we always have non-empty input to our neural network. $P(x_t) = P(x_1 | BOS)$* End Of Sequence (__EOS__) - you guess it... this token is at the end of each sequence. The catch is that it should __not__ occur anywhere else except at the very end. If our model produces this token, the sequence is over.
###Code
BOS, EOS = ' ', '\n'
data = pd.read_json("./arxivData.json")
lines = data.apply(lambda row: (row['title'] + ' ; ' + row['summary'])[:512], axis=1) \
.apply(lambda line: BOS + line.replace(EOS, ' ') + EOS) \
.tolist()
# if you missed the seminar, download data here - https://yadi.sk/d/_nGyU2IajjR9-w
###Output
_____no_output_____
###Markdown
Our next step is __building char-level vocabulary__. Put simply, you need to assemble a list of all unique tokens in the dataset.
###Code
# get all unique characters from lines (including capital letters and symbols)
tokens = <YOUR CODE>
tokens = sorted(tokens)
n_tokens = len(tokens)
print ('n_tokens = ',n_tokens)
assert 100 < n_tokens < 150
assert BOS in tokens and EOS in tokens, "BOS and EOS must be in the vocabulary"
###Output
_____no_output_____
###Markdown
We can now assign each character its index in the tokens list. This way we can encode a string into a TF-friendly integer vector.
###Code
# dictionary of character -> its identifier (index in tokens list)
token_to_id = <YOUR CODE>
assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(n_tokens):
assert token_to_id[tokens[i]] == i, "token identifier must be it's position in tokens list"
print("Seems alright!")
###Output
_____no_output_____
###Markdown
Our final step is to assemble several strings into an integer matrix `[batch_size, text_length]`. The only problem is that each sequence has a different length. We can work around that by padding short sequences with extra _EOS_ tokens or cropping long sequences. Here's how it works:
###Code
def to_matrix(lines, max_len=None, pad=token_to_id[EOS], dtype='int32'):
"""Casts a list of lines into tf-digestable matrix"""
max_len = max_len or max(map(len, lines))
lines_ix = np.full([len(lines), max_len], pad, dtype=dtype)
for i in range(len(lines)):
line_ix = list(map(token_to_id.get, lines[i][:max_len]))
lines_ix[i, :len(line_ix)] = line_ix
return lines_ix
# Example: cast 3 dummy lines to a matrix, padded with EOS
dummy_lines = [
' abc\n',
' abacaba\n',
' abc1234567890\n',
]
print(to_matrix(dummy_lines))
###Output
_____no_output_____
###Markdown
Neural Language Model (2 points including training)Just like for N-gram LMs, we want to estimate probability of text as a joint probability of tokens (symbols this time).$$P(X) = \prod_t P(x_t \mid x_0, \dots, x_{t-1}).$$ Instead of counting all possible statistics, we want to train a neural network with parameters $\theta$ that estimates the conditional probabilities:$$ P(x_t \mid x_0, \dots, x_{t-1}) \approx p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$But before we optimize, we need to define our neural network. Let's start with a fixed-window (aka convolutional) architecture:
###Code
import tensorflow as tf
keras, L = tf.keras, tf.keras.layers
assert tf.__version__.startswith('2'), "Current tf version: {}; required: 2.0.*".format(tf.__version__)
class FixedWindowLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64):
"""
        A fixed window model that looks at (at least) the 5 previous symbols.
        Note: a fixed window LM is effectively performing a convolution over a sequence of words.
        This convolution only looks at the current and previous tokens.
Such convolution can be represented as a sequence of 2 operations:
- pad input vectors by {strides * (filter_size - 1)} zero vectors on the "left", do not pad right
- perform regular convolution with {filter_size} and {strides}
- If you're absolutely lost, here's a hint: use ZeroPadding1D and Conv1D from keras.layers
You can stack several convolutions at once
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
#YOUR CODE - create layers/variables and any metadata you want, e.g. self.emb = L.Embedding(...)
<...>
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
# YOUR CODE - apply layers, see docstring above
return <...>
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
model = FixedWindowLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
print('Weights:', tuple(w.name for w in model.trainable_variables))
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
We can now tune our network's parameters to minimize categorical crossentropy over the training dataset $D$:$$ L = {\frac1{|D|}} \sum_{X \in D} \sum_{x_t \in X} - \log p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$As usual with neural nets, this optimization is performed via stochastic gradient descent with backprop. One can also note that minimizing crossentropy is equivalent to minimizing model __perplexity__, KL-divergence or maximizing log-likelihood.
###Code
def compute_lengths(input_ix, eos_ix=token_to_id[EOS]):
""" compute length of each line in input ix (incl. first EOS), int32 vector of shape [batch_size] """
count_eos = tf.cumsum(tf.cast(tf.equal(input_ix, eos_ix), tf.int32), axis=1, exclusive=True)
lengths = tf.reduce_sum(tf.cast(tf.equal(count_eos, 0), tf.int32), axis=1)
return lengths
print('matrix:\n', dummy_input_ix.numpy())
print('lengths:', compute_lengths(dummy_input_ix).numpy())
def compute_loss(model, input_ix):
"""
:param model: language model that can compute next token logits given token indices
:param input ix: int32 matrix of tokens, shape: [batch_size, length]; padded with eos_ix
"""
input_ix = tf.convert_to_tensor(input_ix, dtype=tf.int32)
logits = model(input_ix[:, :-1])
reference_answers = input_ix[:, 1:]
# Your task: implement loss function as per formula above
# your loss should only be computed on actual tokens, excluding padding
# predicting actual tokens and first EOS do count. Subsequent EOS-es don't
# you will likely need to use compute_lengths and/or tf.sequence_mask to get it right.
<YOUR CODE>
return <YOUR CODE: return scalar loss>
loss_1 = compute_loss(model, to_matrix(dummy_lines, max_len=50))
loss_2 = compute_loss(model, to_matrix(dummy_lines, max_len=100))
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. '\
'Hint: use tf.sequence_mask. Beware +/-1 errors. And be careful when averaging!'
###Output
_____no_output_____
###Markdown
EvaluationYou will need two functions: one to compute test loss and another to generate samples. For your convenience, we implemented them both in your stead.
###Code
def score_lines(model, dev_lines, batch_size):
""" computes average loss over the entire dataset """
dev_loss_num, dev_loss_len = 0., 0.
for i in range(0, len(dev_lines), batch_size):
batch_ix = to_matrix(dev_lines[i: i + batch_size])
dev_loss_num += compute_loss(model, batch_ix) * len(batch_ix)
dev_loss_len += len(batch_ix)
return dev_loss_num / dev_loss_len
def generate(model, prefix=BOS, temperature=1.0, max_len=100):
"""
Samples output sequence from probability distribution obtained by model
:param temperature: samples proportionally to model probabilities ^ temperature
if temperature == 0, always takes most likely token. Break ties arbitrarily.
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
if temperature == 0:
next_token = tokens[np.argmax(probs)]
else:
probs = np.array([p ** (1. / temperature) for p in probs])
probs /= sum(probs)
next_token = np.random.choice(tokens, p=probs)
prefix += next_token
if next_token == EOS or len(prefix) > max_len: break
return prefix
###Output
_____no_output_____
###Markdown
Training loopFinally, let's train our model on minibatches of data
###Code
from sklearn.model_selection import train_test_split
train_lines, dev_lines = train_test_split(lines, test_size=0.25, random_state=42)
batch_size = 256
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
from IPython.display import clear_output
from random import sample
from tqdm import trange
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
RNN Language Models (3 points including training)Fixed-size architectures are reasonably good at capturing short-term dependencies, but their design prevents them from capturing any signal outside their window. We can mitigate this problem by using a __recurrent neural network__:$$ h_0 = \vec 0 ; \quad h_{t+1} = RNN(x_t, h_t) $$$$ p(x_t \mid x_0, \dots, x_{t-1}, \theta) = dense_{softmax}(h_t) $$Such a model processes one token at a time, left to right, and maintains a hidden state vector between steps. Theoretically, it can learn arbitrarily long temporal dependencies given a large enough hidden size.
###Code
class RNNLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256):
"""
Build a recurrent language model.
You are free to choose anything you want, but the recommended architecture is
- token embeddings
- one or more LSTM/GRU layers with hid size
- linear layer to predict logits
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
# YOUR CODE - create layers/variables/etc
<...>
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
#YOUR CODE
return <...>
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
model = RNNLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
print('Weights:', tuple(w.name for w in model.trainable_variables))
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
RNN trainingOur RNN language model should optimize the same loss function as the fixed-window model. But there's a catch. Since an RNN recurrently multiplies gradients through many time-steps, gradient values may explode, [ruining](https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/nan.jpg) your model.The common solution to that problem is to clip gradients either [individually](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_value) or [globally](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_global_norm).Your task here is to set up a training step that minimizes the same loss function. If you encounter large loss fluctuations during training, please add gradient clipping using the links above._Note: gradient clipping is not exclusive to RNNs. Convolutional networks with enough depth often suffer from the same issue._
###Code
batch_size = 64 # <-- please tune batch size to fit your CPU/GPU configuration
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
Alternative sampling strategies (1 point)So far we've sampled tokens from the model in proportion to their probability.However, this approach can sometimes generate nonsense words due to the fact that softmax probabilities of these words are never exactly zero. This issue can be somewhat mitigated with sampling temperature, but low temperature harms sampling diversity. Can we remove the nonsense words without sacrificing diversity? __Yes, we can!__ But it takes a different sampling strategy.__Top-k sampling:__ on each step, sample the next token from the __k most likely__ candidates from the language model.Suppose $k=3$ and the token probabilities are $p=[0.1, 0.35, 0.05, 0.2, 0.3]$. You first need to select the $k$ most likely words and set the probability of the rest to zero: $\hat p=[0.0, 0.35, 0.0, 0.2, 0.3]$ and re-normalize: $p^*\approx[0.0, 0.412, 0.0, 0.235, 0.353]$.__Nucleus sampling:__ similar to top-k sampling, but this time we select $k$ dynamically. In nucleus sampling, we sample from the top-__N%__ fraction of the probability mass.Using the same $p=[0.1, 0.35, 0.05, 0.2, 0.3]$ and nucleus N=0.9, the nucleus words consist of:1. the most likely token $w_2$, because $p(w_2) < N$2. the second most likely token $w_5$, because $p(w_2) + p(w_5) = 0.65 < N$3. the third most likely token $w_4$, because $p(w_2) + p(w_5) + p(w_4) = 0.85 < N$And that's it, because the next most likely word would overflow: $p(w_2) + p(w_5) + p(w_4) + p(w_1) = 0.95 > N$.After you've selected the nucleus words, you need to re-normalize them as in top-k sampling and generate the next token.__Your task__ is to implement the nucleus sampling variant and see if it's any good.
###Code
def generate_nucleus(model, prefix=BOS, nucleus=0.9, max_len=100):
"""
    Generate a sequence with nucleus sampling
:param prefix: a string containing space-separated previous tokens
:param nucleus: N from the formulae above, N \in [0, 1]
:param max_len: generate sequences with at most this many tokens, including prefix
    :note: make sure that the nucleus always contains at least one word, even if p(w*) > nucleus
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
<YOUR CODE HERE>
prefix += <YOUR CODE>
if next_token == EOS or len(prefix) > max_len: break
return prefix
for i in range(10):
    print(generate_nucleus(model, nucleus=PLAY_WITH_ME_SENPAI))
###Output
_____no_output_____
###Markdown
Bonus quest I: Beam Search (2 pts incl. samples)At times, you don't really want the model to generate diverse outputs as much as you want a __single most likely hypothesis:__ a single best translation, the most likely continuation of a search query given its prefix, etc. Except, you can't get it exactly. In order to find the exact most likely sequence containing 10 tokens, you would need to enumerate all $|V|^{10}$ possible hypotheses. In practice, 9 times out of 10 you will instead find an approximate most likely output using __beam search__.Here's how it works:0. Initial `beam` = [prefix], max beam_size = k1. for T steps:2. ` ... ` generate all possible next tokens for all hypotheses in the beam, forming `len(beam) * len(vocab)` candidates3. ` ... ` select the beam_size best among all candidates as the new `beam`4. Select the best hypothesis from the beam
###Code
from IPython.display import HTML
# Here's what it looks like:
!wget -q https://raw.githubusercontent.com/yandexdataschool/nlp_course/2020/resources/beam_search.html
HTML("beam_search.html")
def generate_beamsearch(model, prefix=BOS, beam_size=4, length=5):
"""
    Generate the most likely sequence with beam search
    :param prefix: a string containing space-separated previous tokens
    :param beam_size: the number of hypotheses kept after every step
    :param length: generate sequences with at most this many tokens, NOT INCLUDING PREFIX
    :returns: the most likely sequence found by beam search
<YOUR CODE HERE>
return <most likely sequence>
generate_beamsearch(model, prefix=' deep ', beam_size=4)
# check it out: which beam size works best?
# find at least 5 prefixes where beam_size=1 and 8 generates different sequences
###Output
_____no_output_____
###Markdown
Homework: going neural (6 pts)We've checked out statistical approaches to language models in the last notebook. Now let's go find out what deep learning has to offer.We're gonna use the same dataset as before, except this time we build a language model that's character-level, not word level. Before you go:* If you haven't done seminar already, use `seminar.ipynb` to download the data.* This homework uses TensorFlow v2.0: this is [how you install it](https://www.tensorflow.org/beta); and that's [how you use it](https://colab.research.google.com/drive/1YtfbZGgzKr7fpBTqkdEQtu4vUALoTv8A).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Working on character level means that we don't need to deal with large vocabulary or missing words. Heck, we can even keep uppercase words in text! The downside, however, is that all our sequences just got a lot longer.However, we still need special tokens:* Begin Of Sequence (__BOS__) - this token is at the start of each sequence. We use it so that we always have non-empty input to our neural network. $P(x_t) = P(x_1 | BOS)$* End Of Sequence (__EOS__) - you guess it... this token is at the end of each sequence. The catch is that it should __not__ occur anywhere else except at the very end. If our model produces this token, the sequence is over.
###Code
BOS, EOS = ' ', '\n'
data = pd.read_json("./arxivData.json")
lines = data.apply(lambda row: (row['title'] + ' ; ' + row['summary'])[:512], axis=1) \
.apply(lambda line: BOS + line.replace(EOS, ' ') + EOS) \
.tolist()
# if you missed the seminar, download data here - https://yadi.sk/d/_nGyU2IajjR9-w
###Output
_____no_output_____
###Markdown
Our next step is __building char-level vocabulary__. Put simply, you need to assemble a list of all unique tokens in the dataset.
###Code
# get all unique characters from lines (including capital letters and symbols)
tokens = <YOUR CODE>
tokens = sorted(tokens)
n_tokens = len(tokens)
print ('n_tokens = ',n_tokens)
assert 100 < n_tokens < 150
assert BOS in tokens and EOS in tokens, "BOS and EOS must be in the vocabulary"
###Output
_____no_output_____
###Markdown
We can now assign each character its index in the tokens list. This way we can encode a string into a TF-friendly integer vector.
###Code
# dictionary of character -> its identifier (index in tokens list)
token_to_id = <YOUR CODE>
assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(n_tokens):
assert token_to_id[tokens[i]] == i, "token identifier must be it's position in tokens list"
print("Seems alright!")
###Output
_____no_output_____
###Markdown
Our final step is to assemble several strings into an integer matrix `[batch_size, text_length]`. The only problem is that each sequence has a different length. We can work around that by padding short sequences with extra _EOS_ tokens or cropping long sequences. Here's how it works:
###Code
def to_matrix(lines, max_len=None, pad=token_to_id[EOS], dtype='int32'):
"""Casts a list of lines into tf-digestable matrix"""
max_len = max_len or max(map(len, lines))
lines_ix = np.full([len(lines), max_len], pad, dtype=dtype)
for i in range(len(lines)):
line_ix = list(map(token_to_id.get, lines[i][:max_len]))
lines_ix[i, :len(line_ix)] = line_ix
return lines_ix
# Example: cast 3 dummy lines to a matrix, padded with EOS
dummy_lines = [
' abc\n',
' abacaba\n',
' abc1234567890\n',
]
print(to_matrix(dummy_lines))
###Output
_____no_output_____
###Markdown
Neural Language Model (2 points including training)Just like for N-gram LMs, we want to estimate probability of text as a joint probability of tokens (symbols this time).$$P(X) = \prod_t P(x_t \mid x_0, \dots, x_{t-1}).$$ Instead of counting all possible statistics, we want to train a neural network with parameters $\theta$ that estimates the conditional probabilities:$$ P(x_t \mid x_0, \dots, x_{t-1}) \approx p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$But before we optimize, we need to define our neural network. Let's start with a fixed-window (aka convolutional) architecture:
###Code
import tensorflow as tf
keras, L = tf.keras, tf.keras.layers
assert tf.__version__.startswith('2'), "Current tf version: {}; required: 2.0.*".format(tf.__version__)
class FixedWindowLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64):
"""
        A fixed window model that looks at (at least) the 5 previous symbols.
        Note: a fixed window LM is effectively performing a convolution over a sequence of words.
        This convolution only looks at the current and previous tokens.
Such convolution can be represented as a sequence of 2 operations:
- pad input vectors by {strides * (filter_size - 1)} zero vectors on the "left", do not pad right
- perform regular convolution with {filter_size} and {strides}
- If you're absolutely lost, here's a hint: use ZeroPadding1D and Conv1D from keras.layers
You can stack several convolutions at once
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
#YOUR CODE - create layers/variables and any metadata you want, e.g. self.emb = L.Embedding(...)
<...>
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
# YOUR CODE - apply layers, see docstring above
return <...>
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
model = FixedWindowLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
print('Weights:', tuple(w.name for w in model.trainable_variables))
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
We can now tune our network's parameters to minimize categorical crossentropy over the training dataset $D$:$$ L = {\frac1{|D|}} \sum_{X \in D} \sum_{x_t \in X} - \log p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$As usual with neural nets, this optimization is performed via stochastic gradient descent with backprop. One can also note that minimizing crossentropy is equivalent to minimizing model __perplexity__, KL-divergence or maximizing log-likelihood.
###Code
def compute_lengths(input_ix, eos_ix=token_to_id[EOS]):
""" compute length of each line in input ix (incl. first EOS), int32 vector of shape [batch_size] """
count_eos = tf.cumsum(tf.cast(tf.equal(input_ix, eos_ix), tf.int32), axis=1, exclusive=True)
lengths = tf.reduce_sum(tf.cast(tf.equal(count_eos, 0), tf.int32), axis=1)
return lengths
print('matrix:\n', dummy_input_ix.numpy())
print('lengths:', compute_lengths(dummy_input_ix).numpy())
def compute_loss(model, input_ix):
"""
:param model: language model that can compute next token logits given token indices
:param input ix: int32 matrix of tokens, shape: [batch_size, length]; padded with eos_ix
"""
input_ix = tf.convert_to_tensor(input_ix, dtype=tf.int32)
logits = model(input_ix[:, :-1])
reference_answers = input_ix[:, 1:]
# Your task: implement loss function as per formula above
# your loss should only be computed on actual tokens, excluding padding
# predicting actual tokens and first EOS do count. Subsequent EOS-es don't
# you will likely need to use compute_lengths and/or tf.sequence_mask to get it right.
<YOUR CODE>
return <YOUR CODE: return scalar loss>
loss_1 = compute_loss(model, to_matrix(dummy_lines, max_len=15))
loss_2 = compute_loss(model, to_matrix(dummy_lines, max_len=16))
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. '\
'Hint: use tf.sequence_mask. Beware +/-1 errors. And be careful when averaging!'
###Output
_____no_output_____
###Markdown
EvaluationYou will need two functions: one to compute test loss and another to generate samples. For your convenience, we implemented them both in your stead.
###Code
def score_lines(model, dev_lines, batch_size):
""" computes average loss over the entire dataset """
dev_loss_num, dev_loss_len = 0., 0.
for i in range(0, len(dev_lines), batch_size):
batch_ix = to_matrix(dev_lines[i: i + batch_size])
dev_loss_num += compute_loss(model, batch_ix) * len(batch_ix)
dev_loss_len += len(batch_ix)
return dev_loss_num / dev_loss_len
def generate(model, prefix=BOS, temperature=1.0, max_len=100):
"""
Samples output sequence from probability distribution obtained by model
:param temperature: samples proportionally to model probabilities ^ temperature
if temperature == 0, always takes most likely token. Break ties arbitrarily.
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
if temperature == 0:
next_token = tokens[np.argmax(probs)]
else:
probs = np.array([p ** (1. / temperature) for p in probs])
probs /= sum(probs)
next_token = np.random.choice(tokens, p=probs)
prefix += next_token
if next_token == EOS or len(prefix) > max_len: break
return prefix
###Output
_____no_output_____
###Markdown
Training loopFinally, let's train our model on minibatches of data
###Code
from sklearn.model_selection import train_test_split
train_lines, dev_lines = train_test_split(lines, test_size=0.25, random_state=42)
batch_size = 256
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
from IPython.display import clear_output
from random import sample
from tqdm import trange
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
RNN Language Models (3 points including training)Fixed-size architectures are reasonably good at capturing short-term dependencies, but their design prevents them from capturing any signal outside their window. We can mitigate this problem by using a __recurrent neural network__:$$ h_0 = \vec 0 ; \quad h_{t+1} = RNN(x_t, h_t) $$$$ p(x_t \mid x_0, \dots, x_{t-1}, \theta) = dense_{softmax}(h_t) $$Such a model processes one token at a time, left to right, and maintains a hidden state vector between steps. Theoretically, it can learn arbitrarily long temporal dependencies given a large enough hidden size.
###Code
class RNNLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256):
"""
Build a recurrent language model.
You are free to choose anything you want, but the recommended architecture is
- token embeddings
- one or more LSTM/GRU layers with hid size
- linear layer to predict logits
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
# YOUR CODE - create layers/variables/etc
<...>
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
#YOUR CODE
return <...>
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
model = RNNLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
print('Weights:', tuple(w.name for w in model.trainable_variables))
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
RNN trainingOur RNN language model should optimize the same loss function as the fixed-window model. But there's a catch. Since an RNN recurrently multiplies gradients through many time-steps, gradient values may explode, [ruining](https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/nan.jpg) your model.The common solution to that problem is to clip gradients either [individually](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_value) or [globally](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_global_norm).Your task here is to prepare a TensorFlow graph that minimizes the same loss function. If you encounter large loss fluctuations during training, please add gradient clipping using the links above._Note: gradient clipping is not exclusive to RNNs. Convolutional networks with enough depth often suffer from the same issue._
###Code
batch_size = 64 # <-- please tune batch size to fit your CPU/GPU configuration
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
Alternative sampling strategies (1 point)So far we've sampled tokens from the model in proportion to their probability.However, this approach can sometimes generate nonsense words, because the softmax probabilities of these words are never exactly zero. This issue can be somewhat mitigated with sampling temperature, but low temperature harms sampling diversity. Can we remove the nonsense words without sacrificing diversity? __Yes, we can!__ But it takes a different sampling strategy.__Top-k sampling:__ on each step, sample the next token from the __k most likely__ candidates from the language model.Suppose $k=3$ and the token probabilities are $p=[0.1, 0.35, 0.05, 0.2, 0.3]$. You first need to select the $k$ most likely words and set the probability of the rest to zero: $\hat p=[0.0, 0.35, 0.0, 0.2, 0.3]$, and re-normalize: $p^*\approx[0.0, 0.412, 0.0, 0.235, 0.353]$.__Nucleus sampling:__ similar to top-k sampling, but this time we select $k$ dynamically. In nucleus sampling, we sample from the top-__N%__ fraction of the probability mass.Using the same $p=[0.1, 0.35, 0.05, 0.2, 0.3]$ and nucleus N=0.9, the nucleus words consist of:1. the most likely token $w_2$, because $p(w_2) < N$2. the second most likely token $w_5$, because $p(w_2) + p(w_5) = 0.65 < N$3. the third most likely token $w_4$, because $p(w_2) + p(w_5) + p(w_4) = 0.85 < N$And that's it, because the next most likely word would overflow: $p(w_2) + p(w_5) + p(w_4) + p(w_1) = 0.95 > N$.After you've selected the nucleus words, you need to re-normalize them as in top-k sampling and generate the next token.__Your task__ is to implement the nucleus sampling variant and see if it's any good.
###Code
def generate_nucleus(model, prefix=BOS, nucleus=0.9, max_len=100):
"""
    Generate a sequence with nucleus sampling
    :param prefix: a string containing previously generated tokens
    :param nucleus: N from the formulae above, N \in [0, 1]
    :param max_len: generate sequences with at most this many tokens, including prefix
    :note: make sure that the nucleus always contains at least one word, even if p(w*) > nucleus
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
<YOUR CODE HERE>
prefix += <YOUR CODE>
if next_token == EOS or len(prefix) > max_len: break
return prefix
for i in range(10):
    print(generate_nucleus(model, nucleus=PLAY_WITH_ME_SENPAI))
###Output
_____no_output_____
###Markdown
Homework: going neural (6 pts)We've checked out statistical approaches to language models in the last notebook. Now let's go find out what deep learning has to offer.We're gonna use the same dataset as before, except this time we build a language model that's character-level, not word level. Before you go:* If you haven't done seminar already, use `seminar.ipynb` to download the data.* This homework uses TensorFlow v2.0: this is [how you install it](https://www.tensorflow.org/beta); and that's [how you use it](https://colab.research.google.com/drive/1YtfbZGgzKr7fpBTqkdEQtu4vUALoTv8A).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Working on character level means that we don't need to deal with a large vocabulary or missing words. Heck, we can even keep uppercase words in text! The downside, however, is that all our sequences just got a lot longer.However, we still need special tokens:* Begin Of Sequence (__BOS__) - this token is at the start of each sequence. We use it so that we always have non-empty input to our neural network: $P(x_1) = P(x_1 \mid BOS)$* End Of Sequence (__EOS__) - you guessed it... this token is at the end of each sequence. The catch is that it should __not__ occur anywhere else except at the very end. If our model produces this token, the sequence is over.
###Code
# Alternative manual download link: https://yadi.sk/d/_nGyU2IajjR9-w
!wget "https://www.dropbox.com/s/99az9n1b57qkd9j/arxivData.json.tar.gz?dl=1" -O arxivData.json.tar.gz
!tar -xvzf arxivData.json.tar.gz
data = pd.read_json("./arxivData.json")
data.sample(n=5)
BOS, EOS = ' ', '\n'
data = pd.read_json("./arxivData.json")
lines = data.apply(lambda row: (row['title'] + ' ; ' + row['summary'])[:512], axis=1) \
.apply(lambda line: BOS + line.replace(EOS, ' ') + EOS) \
.tolist()
# if you missed the seminar, download data here - https://yadi.sk/d/_nGyU2IajjR9-w
###Output
_____no_output_____
###Markdown
Our next step is __building a char-level vocabulary__. Put simply, you need to assemble a list of all unique tokens in the dataset.
###Code
# get all unique characters from lines (including capital letters and symbols)
tokens = set(char for line in lines for char in line)
tokens = sorted(tokens)
n_tokens = len(tokens)
print ('n_tokens = ',n_tokens)
assert 100 < n_tokens < 150
assert BOS in tokens and EOS in tokens, "BOS and EOS must be in the token list"
###Output
_____no_output_____
###Markdown
We can now assign each character its index in the tokens list. This way we can encode a string into a TF-friendly integer vector.
###Code
# dictionary of character -> its identifier (index in tokens list)
token_to_id = {char: i for i, char in enumerate(tokens)}
assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(n_tokens):
    assert token_to_id[tokens[i]] == i, "token identifier must be its position in tokens list"
print("Seems alright!")
###Output
_____no_output_____
###Markdown
Our final step is to assemble several strings into an integer matrix `[batch_size, text_length]`. The only problem is that each sequence has a different length. We can work around that by padding short sequences with extra _EOS_ or cropping long sequences. Here's how it works:
###Code
def to_matrix(lines, max_len=None, pad=token_to_id[EOS], dtype='int32'):
"""Casts a list of lines into tf-digestable matrix"""
max_len = max_len or max(map(len, lines))
lines_ix = np.full([len(lines), max_len], pad, dtype=dtype)
for i in range(len(lines)):
line_ix = list(map(token_to_id.get, lines[i][:max_len]))
lines_ix[i, :len(line_ix)] = line_ix
return lines_ix
#Example: cast 4 random names to matrices, pad with zeros
dummy_lines = [
' abc\n',
' abacaba\n',
' abc1234567890\n',
]
print(to_matrix(dummy_lines))
###Output
_____no_output_____
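###Markdown
As a quick sanity check (an illustrative aside, not part of the assignment), we can also invert `to_matrix` and turn a padded row of indices back into text; `matrix_to_lines` below is a hypothetical helper name.
###Code
def matrix_to_lines(matrix):
    """Inverse of to_matrix: decode each row of indices back into a string, stopping at the first EOS."""
    decoded = []
    for row in matrix:
        text = ''.join(tokens[i] for i in row)
        decoded.append(text.split(EOS)[0] + EOS)  # drop the EOS padding, keep the first EOS
    return decoded
print(matrix_to_lines(to_matrix(dummy_lines)))
###Output
_____no_output_____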
###Markdown
Neural Language Model (2 points including training)Just like for N-gram LMs, we want to estimate probability of text as a joint probability of tokens (symbols this time).$$P(X) = \prod_t P(x_t \mid x_0, \dots, x_{t-1}).$$ Instead of counting all possible statistics, we want to train a neural network with parameters $\theta$ that estimates the conditional probabilities:$$ P(x_t \mid x_0, \dots, x_{t-1}) \approx p(x_t \mid x_0, \dots, x_{t-1}, \theta) $$But before we optimize, we need to define our neural network. Let's start with a fixed-window (aka convolutional) architecture:
###Code
import tensorflow as tf
keras, L = tf.keras, tf.keras.layers
assert tf.__version__.startswith('2'), "Current tf version: {}; required: 2.0.*".format(tf.__version__)
class FixedWindowLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=64):
"""
A fixed window model that looks on at least 5 previous symbols.
Note: fixed window LM is effectively performing a convolution over a sequence of words.
This convolution only looks on current and previous words.
Such convolution can be represented as a sequence of 2 operations:
- pad input vectors by {strides * (filter_size - 1)} zero vectors on the "left", do not pad right
- perform regular convolution with {filter_size} and {strides}
- If you're absolutely lost, here's a hint: use ZeroPadding1D and Conv1D from keras.layers
You can stack several convolutions at once
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
#YOUR CODE - create layers/variables and any metadata you want, e.g. self.emb = L.Embedding(...)
strides = 1
batch_input = L.Input(shape=(None, ), dtype='int32')
emb_layer = L.Embedding(input_dim = n_tokens, output_dim = emb_size)
embedded_input = emb_layer(batch_input)
conv_5 = L.Conv1D(filters=hid_size, kernel_size=5, strides=strides, padding='causal')(embedded_input)
conv_6 = L.Conv1D(filters=hid_size, kernel_size=6, strides=strides, padding='causal')(embedded_input)
conv_7 = L.Conv1D(filters=hid_size, kernel_size=7, strides=strides, padding='causal')(embedded_input)
concat_conv = L.Concatenate(axis=2)([conv_5, conv_6, conv_7])
logits = L.TimeDistributed(L.Dense(units=n_tokens))(concat_conv)
self.model = keras.models.Model(inputs=batch_input, outputs=logits)
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
# YOUR CODE - apply layers, see docstring above
return self.model(input_ix)
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
model = FixedWindowLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
print('Weights:', tuple(w.name for w in model.trainable_variables))
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
We can now tune our network's parameters to minimize categorical crossentropy over the training dataset $D$:$$ L = {\frac1{|D|}} \sum_{X \in D} \sum_{x_t \in X} - \log p(x_t \mid x_1, \dots, x_{t-1}, \theta) $$As usual with neural nets, this optimization is performed via stochastic gradient descent with backprop. One can also note that minimizing crossentropy is equivalent to minimizing model __perplexity__ or KL-divergence, or to maximizing log-likelihood.
###Code
def compute_lengths(input_ix, eos_ix=token_to_id[EOS]):
""" compute length of each line in input ix (incl. first EOS), int32 vector of shape [batch_size] """
count_eos = tf.cumsum(tf.cast(tf.equal(input_ix, eos_ix), tf.int32), axis=1, exclusive=True)
lengths = tf.reduce_sum(tf.cast(tf.equal(count_eos, 0), tf.int32), axis=1)
return lengths
print('matrix:\n', dummy_input_ix.numpy())
print('lengths:', compute_lengths(dummy_input_ix).numpy())
def compute_loss(model, input_ix):
"""
:param model: language model that can compute next token logits given token indices
:param input ix: int32 matrix of tokens, shape: [batch_size, length]; padded with eos_ix
"""
input_ix = tf.convert_to_tensor(input_ix, dtype=tf.int32)
logits = model(input_ix[:, :-1])
probs = tf.nn.softmax(logits)
reference_answers = input_ix[:, 1:]
reference_answers_ohe = tf.one_hot(reference_answers, depth=n_tokens)
# Your task: implement loss function as per formula above
# your loss should only be computed on actual tokens, excluding padding
# predicting actual tokens and first EOS do count. Subsequent EOS-es don't
# you will likely need to use compute_lengths and/or tf.sequence_mask to get it right.
lengths = tf.subtract(compute_lengths(reference_answers), 1)
length_mask = tf.sequence_mask(lengths, tf.shape(reference_answers)[1], dtype=tf.float32)
multiply_probs = tf.multiply(reference_answers_ohe, probs)
probs_true_symbols = tf.reduce_max(multiply_probs, axis=-1)
log_probs = tf.math.log(probs_true_symbols)
log_probs_without_inf = tf.where(tf.math.is_inf(log_probs), tf.ones_like(log_probs) * (-3), log_probs)
log_probs_end = tf.multiply(log_probs_without_inf, length_mask)
# print(log_probs_without_inf.shape)
sum_probs = tf.reduce_sum(log_probs_end, axis=-1)
loss = -tf.reduce_mean(sum_probs)
return loss
loss_1 = compute_loss(model, to_matrix(dummy_lines, max_len=15))
loss_2 = compute_loss(model, to_matrix(dummy_lines, max_len=16))
assert (np.ndim(loss_1) == 0) and (0 < loss_1 < 100), "loss must be a positive scalar"
assert np.allclose(loss_1, loss_2), 'do not include AFTER first EOS into loss. '\
'Hint: use tf.sequence_mask. Beware +/-1 errors. And be careful when averaging!'
###Output
_____no_output_____
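###Markdown
As an illustrative aside (not part of the assignment), the crossentropy above is closely related to __perplexity__: if `mean_nll` is the average negative log-likelihood per predicted token, then perplexity is simply `exp(mean_nll)`. The sketch below uses a hypothetical `mean_nll` value rather than the batch loss, because `compute_loss` averages per sequence, not per token.
###Code
# Minimal sketch of the crossentropy <-> perplexity relationship (mean_nll is a made-up example value).
mean_nll = 2.0                 # hypothetical average negative log-likelihood per token, in nats
perplexity = np.exp(mean_nll)  # ~7.4: the model is roughly as uncertain as a uniform choice over ~7 tokens
print("perplexity for a per-token NLL of %.1f nats: %.2f" % (mean_nll, perplexity))
###Output
_____no_output_____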
###Markdown
EvaluationYou will need two functions: one to compute test loss and another to generate samples. For your convenience, we have implemented both of them for you.
###Code
def score_lines(model, dev_lines, batch_size):
""" computes average loss over the entire dataset """
dev_loss_num, dev_loss_len = 0., 0.
for i in range(0, len(dev_lines), batch_size):
batch_ix = to_matrix(dev_lines[i: i + batch_size])
dev_loss_num += compute_loss(model, batch_ix) * len(batch_ix)
dev_loss_len += len(batch_ix)
return dev_loss_num / dev_loss_len
def generate(model, prefix=BOS, temperature=1.0, max_len=100):
"""
Samples output sequence from probability distribution obtained by model
    :param temperature: samples proportionally to model probabilities ** (1 / temperature)
if temperature == 0, always takes most likely token. Break ties arbitrarily.
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
if temperature == 0:
next_token = tokens[np.argmax(probs)]
else:
probs = np.array([p ** (1. / temperature) for p in probs])
probs /= sum(probs)
next_token = np.random.choice(tokens, p=probs)
prefix += next_token
if next_token == EOS or len(prefix) > max_len: break
return prefix
###Output
_____no_output_____
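###Markdown
To make the chain rule from the earlier sections concrete, here is an illustrative helper (not used by the training code) that scores a single line "by hand" by summing the conditional log-probabilities returned by `get_possible_next_tokens`; `chain_rule_logprob` is a hypothetical name.
###Code
def chain_rule_logprob(model, line):
    """Sum of log P(x_t | x_0..x_{t-1}) over the line, using the model's next-token distributions."""
    total = 0.0
    for t in range(1, len(line)):
        next_token_probs = model.get_possible_next_tokens(line[:t])
        total += np.log(next_token_probs[line[t]] + 1e-9)  # small epsilon guards against log(0)
    return total
print("log P of a short dummy line:", chain_rule_logprob(model, dummy_lines[0]))
###Output
_____no_output_____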
###Markdown
Training loopFinally, let's train our model on minibatches of data
###Code
from sklearn.model_selection import train_test_split
train_lines, dev_lines = train_test_split(lines, test_size=0.25, random_state=42)
batch_size = 256
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
from IPython.display import clear_output
from random import sample
from tqdm import trange
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
RNN Language Models (3 points including training)Fixed-size architectures are reasonably good at capturing short-term dependencies, but their design prevents them from capturing any signal outside their window. We can mitigate this problem by using a __recurrent neural network__:$$ h_0 = \vec 0 ; \quad h_{t+1} = RNN(x_t, h_t) $$$$ p(x_t \mid x_0, \dots, x_{t-1}, \theta) = dense_{softmax}(h_t) $$Such a model processes one token at a time, left to right, and maintains a hidden state vector between steps. Theoretically, it can learn arbitrarily long temporal dependencies given a large enough hidden size.
###Code
class RNNLanguageModel(L.Layer):
def __init__(self, n_tokens=n_tokens, emb_size=16, hid_size=256):
"""
Build a recurrent language model.
You are free to choose anything you want, but the recommended architecture is
- token embeddings
- one or more LSTM/GRU layers with hid size
- linear layer to predict logits
"""
super().__init__() # initialize base class to track sub-layers, trainable variables, etc.
# YOUR CODE - create layers/variables/etc
batch_input = L.Input(shape=(None,), dtype='int32')
emb_layer = L.Embedding(input_dim=n_tokens, output_dim=emb_size)
embedded_input = emb_layer(batch_input)
hidden_states = L.LSTM(units=hid_size, return_sequences=True)(embedded_input)
logits = L.TimeDistributed(L.Dense(units=n_tokens))(hidden_states)
self.model = keras.models.Model(inputs=batch_input, outputs=logits)
#END OF YOUR CODE
def __call__(self, input_ix):
"""
compute language model logits given input tokens
:param input_ix: batch of sequences with token indices, tf tensor: int32[batch_size, sequence_length]
:returns: pre-softmax linear outputs of language model [batch_size, sequence_length, n_tokens]
these outputs will be used as logits to compute P(x_t | x_0, ..., x_{t - 1})
"""
#YOUR CODE
return self.model(input_ix)
def get_possible_next_tokens(self, prefix=BOS, temperature=1.0, max_len=100):
""" :returns: probabilities of next token, dict {token : prob} for all tokens """
prefix_ix = tf.convert_to_tensor(to_matrix([prefix]), tf.int32)
probs = tf.nn.softmax(self(prefix_ix)[0, -1]).numpy() # shape: [n_tokens]
return dict(zip(tokens, probs))
model = RNNLanguageModel()
# note: tensorflow and keras layers create variables only after they're first applied (called)
dummy_input_ix = tf.constant(to_matrix(dummy_lines))
dummy_logits = model(dummy_input_ix)
assert isinstance(dummy_logits, tf.Tensor)
assert dummy_logits.shape == (len(dummy_lines), max(map(len, dummy_lines)), n_tokens), "please check output shape"
assert np.all(np.isfinite(dummy_logits)), "inf/nan encountered"
assert not np.allclose(dummy_logits.numpy().sum(-1), 1), "please predict linear outputs, don't use softmax (maybe you've just got unlucky)"
print('Weights:', tuple(w.name for w in model.trainable_variables))
# test for lookahead
dummy_input_ix_2 = tf.constant(to_matrix([line[:3] + 'e' * (len(line) - 3) for line in dummy_lines]))
dummy_logits_2 = model(dummy_input_ix_2)
assert np.allclose(dummy_logits[:, :3] - dummy_logits_2[:, :3], 0), "your model's predictions depend on FUTURE tokens. " \
" Make sure you don't allow any layers to look ahead of current token." \
" You can also get this error if your model is not deterministic (e.g. dropout). Disable it for this test."
###Output
_____no_output_____
###Markdown
RNN trainingOur RNN language model should optimize the same loss function as the fixed-window model. But there's a catch. Since an RNN recurrently multiplies gradients through many time-steps, gradient values may explode, [ruining](https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/nan.jpg) your model.The common solution to that problem is to clip gradients either [individually](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_value) or [globally](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/clip_by_global_norm).Your task here is to prepare a TensorFlow graph that minimizes the same loss function. If you encounter large loss fluctuations during training, please add gradient clipping using the links above._Note: gradient clipping is not exclusive to RNNs. Convolutional networks with enough depth often suffer from the same issue._
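As an illustrative aside, the sketch below shows one way to apply global-norm clipping with `tf.clip_by_global_norm`; `max_grad_norm` is a hypothetical hyperparameter and the helper is not wired into the loop unless you choose to use it.
###Code
max_grad_norm = 5.0
def apply_clipped_gradients(optimizer, grads, variables, clip_norm=max_grad_norm):
    """Clip gradients by their global norm, then apply them (a drop-in for optimizer.apply_gradients)."""
    clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm)
    optimizer.apply_gradients(zip(clipped_grads, variables))
###Output
_____no_output_____
###Markdown
The training loop below mirrors the one used for the fixed-window model; if the loss fluctuates, swap `optimizer.apply_gradients(...)` for the helper above.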
###Code
batch_size = 64 # <-- please tune batch size to fit your CPU/GPU configuration
score_dev_every = 250
train_history, dev_history = [], []
optimizer = keras.optimizers.Adam()
# score untrained model
dev_history.append((0, score_lines(model, dev_lines, batch_size)))
print("Sample before training:", generate(model, 'Bridging'))
for i in trange(len(train_history), 5000):
batch = to_matrix(sample(train_lines, batch_size))
with tf.GradientTape() as tape:
loss_i = compute_loss(model, batch)
grads = tape.gradient(loss_i, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_history.append((i, loss_i.numpy()))
if (i + 1) % 50 == 0:
clear_output(True)
plt.scatter(*zip(*train_history), alpha=0.1, label='train_loss')
if len(dev_history):
plt.plot(*zip(*dev_history), color='red', label='dev_loss')
plt.legend(); plt.grid(); plt.show()
print("Generated examples (tau=0.5):")
for _ in range(3):
print(generate(model, temperature=0.5))
if (i + 1) % score_dev_every == 0:
print("Scoring dev...")
dev_history.append((i, score_lines(model, dev_lines, batch_size)))
print('#%i Dev loss: %.3f' % dev_history[-1])
assert np.mean(train_history[:10], axis=0)[1] > np.mean(train_history[-10:], axis=0)[1], "The model didn't converge."
print("Final dev loss:", dev_history[-1][-1])
for i in range(10):
print(generate(model, temperature=0.5))
###Output
_____no_output_____
###Markdown
Alternative sampling strategies (1 point)So far we've sampled tokens from the model in proportion to their probability.However, this approach can sometimes generate nonsense words, because the softmax probabilities of these words are never exactly zero. This issue can be somewhat mitigated with sampling temperature, but low temperature harms sampling diversity. Can we remove the nonsense words without sacrificing diversity? __Yes, we can!__ But it takes a different sampling strategy.__Top-k sampling:__ on each step, sample the next token from the __k most likely__ candidates from the language model.Suppose $k=3$ and the token probabilities are $p=[0.1, 0.35, 0.05, 0.2, 0.3]$. You first need to select the $k$ most likely words and set the probability of the rest to zero: $\hat p=[0.0, 0.35, 0.0, 0.2, 0.3]$, and re-normalize: $p^*\approx[0.0, 0.412, 0.0, 0.235, 0.353]$.__Nucleus sampling:__ similar to top-k sampling, but this time we select $k$ dynamically. In nucleus sampling, we sample from the top-__N%__ fraction of the probability mass.Using the same $p=[0.1, 0.35, 0.05, 0.2, 0.3]$ and nucleus N=0.9, the nucleus words consist of:1. the most likely token $w_2$, because $p(w_2) < N$2. the second most likely token $w_5$, because $p(w_2) + p(w_5) = 0.65 < N$3. the third most likely token $w_4$, because $p(w_2) + p(w_5) + p(w_4) = 0.85 < N$And that's it, because the next most likely word would overflow: $p(w_2) + p(w_5) + p(w_4) + p(w_1) = 0.95 > N$.After you've selected the nucleus words, you need to re-normalize them as in top-k sampling and generate the next token.__Your task__ is to implement the nucleus sampling variant and see if it's any good.
###Code
def generate_nucleus(model, prefix=BOS, nucleus=0.9, max_len=100):
"""
    Generate a sequence with nucleus sampling
    :param prefix: a string containing previously generated tokens
    :param nucleus: N from the formulae above, N \in [0, 1]
    :param max_len: generate sequences with at most this many tokens, including prefix
    :note: make sure that the nucleus always contains at least one word, even if p(w*) > nucleus
"""
while True:
token_probs = model.get_possible_next_tokens(prefix)
tokens, probs = zip(*token_probs.items())
sort_args = np.argsort(probs)[::-1]
sum_probs = 0
new_tokens = []
new_probs = []
for i in sort_args:
if sum_probs < nucleus:
new_tokens.append(tokens[i])
new_probs.append(probs[i])
sum_probs += probs[i]
else:
break
mult_probs = 1/np.sum(new_probs)
new_probs = [i*mult_probs for i in new_probs]
probs_cumsum = np.cumsum(new_probs[::-1])
probs_cumsum[-1] = 1
rand_val = np.random.uniform()
for i, val in enumerate(probs_cumsum):
if rand_val < val:
next_token = new_tokens[::-1][i]
prefix += next_token
break
if next_token == EOS or len(prefix) > max_len: break
return prefix
for i in range(10):
print(generate_nucleus(model, nucleus=0.9))
###Output
_____no_output_____
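###Markdown
Top-k sampling, described above but not required by the assignment, can be sketched in the same style; `generate_top_k` is a hypothetical name and is shown purely for comparison with the nucleus variant.
###Code
def generate_top_k(model, prefix=BOS, k=5, max_len=100):
    """Sample each next token from the k most likely candidates, re-normalized."""
    while True:
        token_probs = model.get_possible_next_tokens(prefix)
        candidates, probs = zip(*token_probs.items())
        top_ix = np.argsort(probs)[-k:]                       # indices of the k most likely tokens
        top_probs = np.array([probs[i] for i in top_ix])
        top_probs /= top_probs.sum()                          # re-normalize over the top k
        next_token = np.random.choice([candidates[i] for i in top_ix], p=top_probs)
        prefix += next_token
        if next_token == EOS or len(prefix) > max_len: break
    return prefix
print(generate_top_k(model, k=5))
###Output
_____no_output_____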
###Markdown
Bonus quest I: Beam Search (2 pts incl. samples)At times, you don't really want the model to generate diverse outputs as much as you want a __single most likely hypothesis.__ A single best translation, most likely continuation of the search query given prefix, etc. Except, you can't get it. In order to find the exact most likely sequence containing 10 tokens, you would need to enumerate all $|V|^{10}$ possible hypotheses. In practice, 9 times out of 10 you will instead find an approximate most likely output using __beam search__.Here's how it works:0. Initial `beam` = [prefix], max beam_size = k1. for T steps:2. ` ... ` generate all possible next tokens for all hypotheses in beam, formulate `len(beam) * len(vocab)` candidates3. ` ... ` select beam_size best for all candidates as new `beam`4. Select best hypothesis (-es?) from beam
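Before the graded template, here is an illustrative (ungraded) sketch of the procedure above; `beam_search_sketch` is a hypothetical name, it scores hypotheses by summed log-probability, and it ignores EOS handling for brevity.
###Code
def beam_search_sketch(model, prefix=BOS, beam_size=4, length=5):
    beam = [(0.0, prefix)]  # list of (log-probability, hypothesis) pairs
    for _ in range(length):
        candidates = []
        for score, hyp in beam:
            for token, p in model.get_possible_next_tokens(hyp).items():
                candidates.append((score + np.log(p + 1e-9), hyp + token))
        beam = sorted(candidates, reverse=True)[:beam_size]  # keep the beam_size best hypotheses
    return beam[0][1]
print(beam_search_sketch(model, prefix=' deep ', beam_size=4))
###Output
_____no_output_____
###Markdown
The graded template follows.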
###Code
from IPython.display import HTML
# Here's what it looks like:
!wget -q https://raw.githubusercontent.com/yandexdataschool/nlp_course/2020/resources/beam_search.html
HTML("beam_search.html")
def generate_beamsearch(model, prefix=BOS, beam_size=4, length=5):
"""
    Generate a sequence with beam search
    :param prefix: a string containing previously generated tokens
    :param beam_size: the number of hypotheses kept at every step
    :param length: generate sequences with at most this many tokens, NOT INCLUDING PREFIX
    :returns: beam_size most likely candidates
"""
<YOUR CODE HERE>
return <most likely sequence>
generate_beamsearch(model, prefix=' deep ', beam_size=4)
# check it out: which beam size works best?
# find at least 5 prefixes where beam_size=1 and 8 generates different sequences
###Output
_____no_output_____ |
Scientific_workflows/DEAWaterbodies/DEAWaterbodiesToolkit/ErrorEstimationWOfS.ipynb | ###Markdown
Retrieve the matching CRS of the Landsat observations:
###Code
best_crs = mostcommon_crs(dc, product=product, query=dict(geopolygon=gpg, time=date))
###Output
_____no_output_____
###Markdown
Then load the bands WOfS uses, plus fmask and the terrain shadow mask.
###Code
bands = [
"nbart_blue",
"nbart_green",
"nbart_red",
"nbart_nir",
"nbart_swir_1",
"nbart_swir_2",
]
da = dc.load(
product,
geopolygon=datacube.utils.geometry.Geometry(wb.geometry[0].buffer(500), crs=wb.crs),
time=date,
output_crs=best_crs,
resolution=(-30, 30),
resampling="cubic",
measurements=bands + ["fmask", "oa_combined_terrain_shadow"],
)
###Output
_____no_output_____
###Markdown
Save the shadow mask for later use.
###Code
shadow_mask = ~np.array(da.isel(time=0).oa_combined_terrain_shadow, dtype=bool)
###Output
_____no_output_____
###Markdown
We can then examine the image.
###Code
landsat = da.isel(time=0)
rgb(landsat)
###Output
_____no_output_____
###Markdown
WOfS implementationThis is a functionally pure reimplementation of WOfS. This lets you do things like accelerate it with libraries that do not permit in-place modification of arrays, e.g. `jax`. This is based on the [actual implementation of WOfS](https://github.com/GeoscienceAustralia/wofs/blob/master/wofs/).This section implements two different WOfS variants. The first implementation is a hard classifier, i.e. outputs dry/wet with a sharp boundary. This should be identical to the actual implementation of WOfS, though structurally identical to our later variant. We will use this for our Monte Carlo approach.
###Code
def band_ratio(a, b):
"""
Calculates a normalised ratio index.
"""
c = (a - b) / np.where(a + b == 0, np.nan, a + b)
return c
def wofs_classify(px, jnp=np):
    """Classify an array of Landsat pixels as wet or dry."""
ndi_52 = band_ratio(px[4], px[1])
ndi_43 = band_ratio(px[3], px[2])
ndi_72 = band_ratio(px[5], px[1])
b1 = px[0]
b2 = px[1]
b3 = px[2]
b4 = px[3]
b5 = px[4]
b7 = px[5]
# Direct implementation of the WOfS decision tree.
return jnp.where(
ndi_52 <= -0.01,
jnp.where(
b1 <= 2083.5,
jnp.where(
b7 <= 323.5,
jnp.where(ndi_43 <= 0.61, True, False),
jnp.where(
b1 <= 1400.5,
jnp.where(
ndi_72 <= -0.23,
jnp.where(
ndi_43 <= 0.22, True, jnp.where(b1 <= 473.0, True, False)
),
jnp.where(b1 <= 379.0, True, False),
),
jnp.where(ndi_43 <= -0.01, True, False),
),
),
False,
),
jnp.where(
ndi_52 <= 0.23,
jnp.where(
b1 <= 334.5,
jnp.where(
ndi_43 <= 0.54,
jnp.where(
ndi_52 <= 0.12,
True,
jnp.where(
b3 <= 364.5,
jnp.where(b1 <= 129.5, True, False),
jnp.where(b1 <= 300.5, True, False),
),
),
False,
),
False,
),
jnp.where(
ndi_52 <= 0.34,
jnp.where(
b1 <= 249.5,
jnp.where(
ndi_43 <= 0.45,
jnp.where(
b3 <= 364.5, jnp.where(b1 <= 129.5, True, False), True
),
False,
),
False,
),
False,
),
),
)
###Output
_____no_output_____
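###Markdown
Because the classifier takes its array module as the `jnp` argument, the same decision tree can in principle be evaluated with `jax.numpy` instead of NumPy, as noted above. The cell below is an illustrative aside only; it assumes JAX may not be installed and skips itself if it is not. It is not used elsewhere in this notebook.
###Code
try:
    import jax.numpy as jnp  # optional dependency, only used for this demonstration
    dummy_pixels = np.zeros((6, 1, 1))         # bands x height x width
    print(wofs_classify(dummy_pixels, jnp=jnp))
except ImportError:
    print("JAX is not installed; skipping the jax.numpy demonstration")
###Output
_____no_output_____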
###Markdown
The second implementation returns the probabilities associated with each leaf node, which we will use for our marginal estimation approach.
###Code
def wofs_classify_marginal(px, jnp=np):
"""Get the marginal distribution of wet or dry pixels in WOfS.
The code below applies a direct implementation of the WOfS decision tree,
except instead of returning the classification,
we return the percentage of pixels in the training set
at this leaf node that matched the classification.
These values are from Mueller+17 Figure 3.
"""
ndi_52 = band_ratio(px[4], px[1])
ndi_43 = band_ratio(px[3], px[2])
ndi_72 = band_ratio(px[5], px[1])
b1 = px[0]
b2 = px[1]
b3 = px[2]
b4 = px[3]
b5 = px[4]
b7 = px[5]
# Direct implementation of the WOfS decision tree,
# except instead of returning the classification,
# we return the percentage of pixels in the training set
# at this leaf node that matched the classification.
# These values are from Mueller+17 Figure 3.
return jnp.where(
ndi_52 <= -0.01,
jnp.where(
b1 <= 2083.5,
jnp.where(
b7 <= 323.5,
jnp.where(ndi_43 <= 0.61, 0.972, 1.000),
jnp.where(
b1 <= 1400.5,
jnp.where(
ndi_72 <= -0.23,
jnp.where(
ndi_43 <= 0.22, 0.786, jnp.where(b1 <= 473.0, 0.978, 0.967)
),
jnp.where(b1 <= 379.0, 0.831, 0.988), # Typo in the paper
),
jnp.where(ndi_43 <= -0.01, 0.977, 0.997),
),
),
0.999,
),
jnp.where(
ndi_52 <= 0.23,
jnp.where(
b1 <= 334.5,
jnp.where(
ndi_43 <= 0.54,
jnp.where(
ndi_52 <= 0.12, # Typo in the paper
0.801,
jnp.where(
b3 <= 364.5,
jnp.where(b1 <= 129.5, 0.632, 0.902),
jnp.where(b1 <= 300.5, 0.757, 0.885),
),
),
0.974,
),
0.981,
),
jnp.where(
ndi_52 <= 0.34,
jnp.where(
b1 <= 249.5,
jnp.where(
ndi_43 <= 0.45,
jnp.where(
b3 <= 364.5, jnp.where(b1 <= 129.5, 0.616, 0.940), 0.584
),
0.979,
),
0.984,
),
0.996,
),
),
)
###Output
_____no_output_____
###Markdown
We also need a function that converts our Landsat DataArray into the format expected by the classifier.
###Code
def xr_to_cube(landsat):
    """Convert a Landsat xarray Dataset to a DataArray for WOfS."""
return landsat[bands].to_array(dim="band")
landsat_cube = xr_to_cube(landsat)
###Output
_____no_output_____
###Markdown
Probabilities at leaf nodes We can call `wofs_classify_marginal` to estimate the confidence that the WOfS decision tree has for its classification of each pixel. This is a number between 0.5 and 1 where 0.5 indicates that the classification is very unsure about its classification, and 1 indicates that the classifier is very sure about its classification.Each leaf node in WOfS is either a wet node or a dry node (blue or red in the image at the top of this notebook, respectively). The confidence can be converted into a probability estimate $p(\mathrm{pixel\ is\ wet})$ in the following way: if the leaf node a pixel ends up in is wet, then the probability is just the confidence; otherwise, if the leaf node is dry, then the probability is 1 - the confidence. This transformation maps a confidence of 1 to 1 for wet pixels and 0 to dry pixels, so a maximally confident wet prediction will have a probability estimate of 1 and a maximally confident dry prediction will have a probability estimate of 0.This approach makes sense because the different leaf nodes in WOfS have different wetness rates in the training set. For example, consider the following subset of the tree:If a pixel is classified by the lower wet leaf, which contained 97.8% wet pixels during training, then we can be pretty sure that this pixel is really wet. In fact, making the standard assumption that the training data is a representative sample of all pixels, we can be *97.8% sure* that this pixel is really wet. On the other hand, if the pixel was classified by the upper wet leaf, which contained only 78.6% wet pixels during training, we're much less sure: only *78.6% sure*, in fact. In this way, keeping track of which leaf node classifies each pixel allows us to estimate the confidence of the WOfS classifier, and this is precisely what `wofs_classify_marginal` does: returns the probability at each leaf node.
###Code
plt.figure(figsize=(6, 6))
plt.imshow(
# Mask shadows.
np.ma.MaskedArray(
np.where(
wofs_classify(landsat_cube.values),
# Probability that a pixel is wet, rather than the probability that the classification is correct.
wofs_classify_marginal(landsat_cube.values),
1 - wofs_classify_marginal(landsat_cube.values),
),
shadow_mask,
),
vmin=0,
vmax=1,
cmap="coolwarm_r",
)
plt.colorbar(label="p(wet)", fraction=0.04, pad=0.04)
###Output
_____no_output_____
###Markdown
Very certain pixels will be strongly blue or strongly red in the above plot. Less certain pixels will be less saturated. The edge of the lake is uncertain, as are some pixels in the middle. Monte Carlo samplingLandsat (and indeed any instrument) produces inherently noisy observations. Each pixel has intrinsic variation—if we took the same image twice, we'd get a slightly different result, regardless of whether anything actually changed on the ground. This difference is about 11% for Landsat (but quite hard to accurately quantify; this is a topic which the DEA team may explore in future). The standard assumption is that this noise is normally distributed. Under this assumption, $x$ is normally distributed around $y$:$$ x \sim \mathcal N(y, \sigma^2).$$We can draw from this distribution to simulate the effect of the random noise on downstream calculations. In this case, we will add some normally distributed noise to Landsat observations and then use WOfS to classify the resulting noisy observations. When averaged over many trials, this provides an estimate of how much noise affects the WOfS predictions.First define the Monte Carlo function:
###Code
def wofs_monte_carlo(ls_pixels, sigma=50, n_draws=100):
"""Generate Monte Carlo samples from WOfS assuming a given level of Gaussian noise."""
# ls_pixels is bands x height x width
# First, draw a noisy sample of the Landsat image:
# New axes have to go at the start for np.random.normal, but we expect bands to be the first channel, so transpose.
sample = np.random.normal(
ls_pixels, sigma, size=(n_draws,) + ls_pixels.shape
).transpose(1, 2, 3, 0)
# Then predict its wetness using WOfS.
predictions = wofs_classify(sample)
# Return the mean and standard deviation for each pixel.
return predictions.mean(axis=-1), predictions.std(axis=-1), predictions
###Output
_____no_output_____
###Markdown
We can then run the Monte Carlo method. We can set the noise to an unrealistically high value to help see the effects.
###Code
mc = wofs_monte_carlo(landsat_cube, sigma=100)
###Output
_____no_output_____
###Markdown
This actually produces 100 slightly different WOfS classifications of our waterbody:
###Code
fig, axs = plt.subplots(10, 10, figsize=(15, 15))
for i in range(100):
ax = axs[i // 10, i % 10]
ax.axis('off')
ax.imshow(mc[2][:, :, i], vmin=0, vmax=1, cmap='coolwarm_r', interpolation='gaussian')
###Output
_____no_output_____
###Markdown
We can then estimate the probability that each pixel is wet by averaging over all of these samples.
###Code
plt.figure(figsize=(6, 6))
plt.imshow(
np.ma.MaskedArray(mc[0], shadow_mask),
cmap="coolwarm_r",
vmin=0,
vmax=1,
interpolation="nearest",
)
plt.colorbar(label="p(wet)", fraction=0.04, pad=0.04)
###Output
_____no_output_____
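###Markdown
As an illustrative aside, the per-pixel mean over the Monte Carlo draws is an estimate of a Bernoulli parameter, so its own sampling uncertainty can be quantified; the standard-error formula below assumes the draws produced by `wofs_monte_carlo` above are independent.
###Code
p_hat = mc[0]                  # per-pixel mean over the Monte Carlo draws
n_draws = mc[2].shape[-1]      # number of draws actually used (100 here)
standard_error = np.sqrt(p_hat * (1 - p_hat) / n_draws)
print("largest per-pixel standard error:", float(np.nanmax(standard_error)))
###Output
_____no_output_____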
###Markdown
If a pixel has a very uncertain classification, it will be very easy to flip that classification by adding a small value to the pixel. So a pixel that flips a lot is uncertain. The average here is providing an estimate of the Bernoulli distribution that controls each pixel (a Bernoulli distribution can be thought of like flipping a weighted coin, and the average in this case estimates the bias of the coin).Compared to the leaf nodes approach, Monte Carlo does a much better job at characterising this lake as uncertain. Essentially the whole lake has low certainty ($p(\mathrm{wet}) \approx 0.5$), rather than just a few pixels and the edge as in the leaf nodes approach. We also get smoother outputs, whereas the leaf nodes approach has only twice as many possible probabilistic outputs as there are leaf nodes. Virtual product for the leaf nodes approachThere's no good way to build a virtual product for the Monte Carlo approach, but we can build a virtual product for the leaf nodes approach. This will allow easy reuse of the method by letting us load it with a datacube-like API. The result will be a virtual product that describes the "confidence" of the WOfS predictions (distinct from the WOfS "confidence" product, which is a mostly unrelated concept outside the scope of this notebook). This product will let us easily examine the confidence in downstream applications.First we define the transformation:
###Code
# This code is heavily based on the actual WOfS implementation.
# Constants:
WOFS_OUTPUT = [
{"name": "water", "dtype": "uint8", "nodata": 1, "units": "1"},
{"name": "probability", "dtype": "float32", "nodata": np.nan, "units": "1"},
]
NO_DATA = (
1 << 0
) # (dec 1) bit 0: 1=pixel masked out due to NO_DATA in NBAR source, 0=valid data in NBAR
MASKED_CLOUD = 1 << 6 # (dec 64) bit 6: 1=pixel masked out due to cloud
MASKED_CLOUD_SHADOW = 1 << 5 # (dec 32) bit 5: 1=pixel masked out due to cloud shadow
YES_WATER = 1 << 7
# This function is straight out of WOfS and applies cloud mask.
def fmask_filter(fmask):
masking = np.zeros(fmask.shape, dtype=np.uint8)
masking[fmask == 0] += NO_DATA
masking[fmask == 2] += MASKED_CLOUD
masking[fmask == 3] += MASKED_CLOUD_SHADOW
return masking
# This class defines how the virtual product should be calculated from a given Landsat image.
class WOfSClassifier(Transformation):
def __init__(self):
# Define our output bands (water and probability).
self.output_measurements = {m["name"]: Measurement(**m) for m in WOFS_OUTPUT}
def measurements(self, input_measurements):
return self.output_measurements
def compute(self, data):
# Perform the WOfS transformation for each time step.
wofs = []
for time_idx in range(len(data.time)):
# Get the image at the current time.
data_time = data.isel(time=time_idx)
# Convert into the format wofs_classify expects.
nbar_bands = data_time[bands].to_array(dim="band")
# Apply the classification function.
water = xr.DataArray(
# Multiply by YES_WATER to set the WOfS water bit.
# Binary or with the cloud mask to set cloud bits.
(YES_WATER * wofs_classify(nbar_bands).astype(int))
| fmask_filter(data_time.fmask),
coords={"x": nbar_bands.coords["x"], "y": nbar_bands.coords["y"]},
dims=["y", "x"],
)
# Apply the probability estimation function.
# We can ignore cloud masking here: it's already covered by the water measurement.
probs = xr.DataArray(
(wofs_classify_marginal(nbar_bands)),
coords={"x": nbar_bands.coords["x"], "y": nbar_bands.coords["y"]},
dims=["y", "x"],
)
# Construct the dataset that contains water and probability measurements.
ds = xr.Dataset({"water": water, "probability": probs})
wofs.append(ds)
# Concatenate all time steps together along the time axis.
wofs = xr.concat(wofs, dim="time")
# And define the units of that time axis.
wofs.coords["time"] = data.coords["time"]
# Set the CRS to the input CRS.
wofs.attrs["crs"] = data.attrs["crs"]
# Set all values with no data to the no-data value.
nodata_set = np.bitwise_and(wofs.water.data, NO_DATA) == NO_DATA
wofs.water.data[nodata_set] = np.array(NO_DATA, dtype="uint8")
# We now have a product dataset!
return wofs
###Output
_____no_output_____
###Markdown
Then we build the transformation into a virtual product:
###Code
wofs_product = construct(
transform=WOfSClassifier,
input=dict(product=product, measurements=bands + ["fmask"]),
)
###Output
_____no_output_____
###Markdown
Now we can `.load` anything we like. Here's the waterbody from before:
###Code
wofs_loaded = wofs_product.load(
dc,
geopolygon=datacube.utils.geometry.Geometry(wb.geometry[0].buffer(500), crs=wb.crs),
time=date,
output_crs=best_crs,
resolution=(-30, 30),
resampling="nearest",
)
def plot_wofs(wofs_loaded):
"""This function plots a WOfS dataarray."""
xr.where(
# Set clouds to nan.
(wofs_loaded.water != MASKED_CLOUD)
& (wofs_loaded.water != MASKED_CLOUD_SHADOW),
# Calculate p(wet) rather than p(WOfS is correct).
xr.where(
wofs_loaded.water & 128,
wofs_loaded.probability,
1 - wofs_loaded.probability,
),
np.nan,
).isel(time=0).plot.imshow(
vmin=0, vmax=1, cmap="coolwarm_r", interpolation="nearest")
plt.title('Probability each pixel is wet')
plot_wofs(wofs_loaded)
###Output
_____no_output_____
###Markdown
And here's part of Lake Gordon:
###Code
wb_gordon = get_waterbody("r0rvh8fpb")
wofs_gordon = wofs_product.load(
dc,
geopolygon=datacube.utils.geometry.Geometry(
wb_gordon.geometry[0].buffer(500), crs=wb_gordon.crs
),
time="2001-12-28",
output_crs=best_crs,
resolution=(-30, 30),
resampling="cubic",
)
plot_wofs(wofs_gordon)
###Output
_____no_output_____
###Markdown
*** Additional information**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).**Last modified:** November 2020**Compatible datacube version:**
###Code
print(datacube.__version__)
###Output
1.8.3
###Markdown
TagsBrowse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
###Code
**Tags**: :index:`NCI compatible`, :index:`sandbox compatible`, :index:`landsat 7`, :index:`water`, :index:`WOfS`, :index:`DEA Waterbodies`
###Output
_____no_output_____
###Markdown
Error Estimation for Water Observations from Space * [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser* **Compatibility:** Notebook currently compatible with both the `NCI` and `DEA Sandbox` environments* **Products used:** [wofs_albers](https://explorer.sandbox.dea.ga.gov.au/wofs_albers), [ga_ls8c_ard_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls7e_ard_3),[DEA Waterbodies](https://www.ga.gov.au/dea/products/dea-waterbodies) DescriptionWater Observations from Space (WOfS) is a decision tree classifier that classifies Landsat pixels as either wet or dry. For some purposes, though, we'd like a little more information—for example, when WOfS is wrong, how close was it to predicting the correct answer? Does the chance that a pixel is wet decrease as you get closer to the edge of a waterbody? How does noise in Landsat 5 and 7 impact WOfS compared to Landsat 8 (which is far less noisy)? This notebook attempts to help answer these questions by introducing two variations on WOfS that act *probabilistically*: instead of making sharp decisions like wet/dry, they make estimates of the *probability* that each pixel is wet/dry. This is a number that can be anywhere between 0 and 1, where 0 is a confident dry classification and 1 is a confident wet classification.This notebook as a whole defines and applies these variant WOfS classifiers to a single waterbody. The definitions are direct reimplementations of [the WOfS classifier](https://github.com/GeoscienceAustralia/wofs/blob/master/wofs/classifier.py) with two different approaches to probability estimation:1. Monte Carlo2. Estimated marginals of leaf nodes.A Monte Carlo approach is when you draw random samples from a distribution to estimate some transformation of this distribution. In our case, we will add noise to Landsat images and see how this affects the WOfS predictions. A very uncertain classification will flip between wet and dry with just a little noise, whereas a more certain classification will be unchanged.The other approach uses the structure of the decision tree to estimate probabilities (see figure below). Each leaf node in the WOfS classifier has a label (wet or dry), but different numbers of training pixels are assigned to each leaf node. We can look at the distribution of truly wet and dry pixels in each leaf node and use this as an estimate for the accuracy in that node. Then, by tracking which leaf node each pixel getting classified ends up in, we can estimate how accurate that pixel classification is.*Figure 3 of Mueller et al., 2017, showing the WOfS decision tree.**** Getting startedChoose a waterbody in the "Analysis parameters" section and then run all cells. Load packagesImport Python packages that are used for the analysis.
###Code
%matplotlib inline
import sys
import datacube
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import xarray as xr
import geopandas as gpd
from datacube.virtual import construct, Transformation, Measurement
sys.path.append("../../../Scripts")
from dea_plotting import rgb
from dea_spatialtools import xr_rasterize
from dea_datahandling import mostcommon_crs
from dea_waterbodies import get_waterbody
###Output
_____no_output_____
###Markdown
Connect to the datacubeConnect to the datacube so we can access DEA data.The `app` parameter is a unique name for the analysis which is based on the notebook file name.
###Code
dc = datacube.Datacube(app="Error-Estimation-WOfS")
###Output
_____no_output_____
###Markdown
Analysis parametersSpecify the geohash for a waterbody:
###Code
# geohash = 'r38psere6' # Lake Cullivel, NSW
geohash = "r0xf89fch" # Lake Will, TAS
# geohash = 'r6ktp2tme' # Rosendahl Reservoir, NSW
# geohash = 'r3dp1nxh8' # Lake Burley Griffin, ACT
# geohash = 'r3f225n9h' # Weereewa, NSW
# geohash = 'r42yzdv98' # Lake Hart, SA
# geohash = 'r4m0nb20w' # Lake Menindee, NSW
# geohash = 'r4hg88vfn' # Lake Polpitah, NSW
###Output
_____no_output_____
###Markdown
A Landsat surface reflectance product:
###Code
product = "ga_ls7e_ard_3"
###Output
_____no_output_____
###Markdown
And a date with observations:
###Code
date = "2002-01-29"
###Output
_____no_output_____
###Markdown
Load the waterbody polygonWe can use `dea_waterbodies.get_waterbody` to get the polygon of any waterbody in DEA Waterbodies.
###Code
wb = get_waterbody(geohash)
wb.geometry[0]
###Output
_____no_output_____
###Markdown
Load a test imageWe'll load a Landsat image to test out our WOfS probabilities. Use the waterbody polygon as the location to query:
###Code
gpg = datacube.utils.geometry.Geometry(wb.geometry[0], crs=wb.crs)
###Output
_____no_output_____ |
Google Colaboratory/dl1/lesson2-image_models.ipynb | ###Markdown
Multi-label classification Added header for Google ColaboratoryReboot Colab VM
###Code
#!pkill -9 -f ipykernel_launcher
###Output
_____no_output_____
###Markdown
Install torch compatible with fastai:
###Code
from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.3.1-{platform}-linux_x86_64.whl fastai torchvision
###Output
[31mtorchvision 0.2.1 has requirement pillow>=4.1.1, but you'll have pillow 4.0.0 which is incompatible.[0m
[31mplotnine 0.4.0 has requirement scipy>=1.0.0, but you'll have scipy 0.19.1 which is incompatible.[0m
###Markdown
Model weights for other network architectures (e.g. resnext50):
###Code
!wget -q http://files.fast.ai/models/weights.tgz && tar -xzf weights.tgz -C /usr/local/lib/python3.6/dist-packages/fastai
###Output
^C
###Markdown
Kaggle Dog Breed Identification. Get data from https://www.kaggle.com/c/dog-breed-identification
###Code
!pip install -q kaggle
!mkdir -p ~/.kaggle
!echo '{"username":"Your own Kaggle user name","key":"Your own Kaggle API key"}' > ~/.kaggle/kaggle.json
!chmod 600 ~/.kaggle/kaggle.json
!kaggle competitions download -c dog-breed-identification
!mkdir -p data/dogbreed
!unzip -q /content/labels.csv.zip -d data/dogbreed
!unzip -q /content/sample_submission.csv.zip -d data/dogbreed
!unzip -q /content/test.zip -d data/dogbreed
!unzip -q /content/train.zip -d data/dogbreed
!kaggle competitions download -c planet-understanding-the-amazon-from-space
PATH = 'data/planet/'
!mkdir -p {PATH}
!unzip -q /content/sample_submission_v2.csv.zip -d {PATH}
!unzip -q /content/train_v2.csv.zip -d {PATH}
!unzip -q /content/test_v2_file_mapping.csv.zip -d {PATH}
# The Planet imagery from Kaggle comes as .tar.7z archives; extract them in two steps:
# !7za x <filename.tar.7z>   # unpacks the 7z archive and produces <filename.tar>
# !tar -xf <filename.tar>
###Output
_____no_output_____
###Markdown
Original version
###Code
%matplotlib inline
from fastai.conv_learner import *
# Data preparation steps if you are using Crestle:
os.makedirs('data/planet/models', exist_ok=True)
os.makedirs('/cache/planet/tmp', exist_ok=True)
!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/train-jpg {PATH}
!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/test-jpg {PATH}
!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/train_v2.csv {PATH}
!ln -s /cache/planet/tmp {PATH}
ls {PATH}
###Output
ls: cannot access 'data/planet/': No such file or directory
###Markdown
Multi-label versus single-label classification
###Code
from fastai.plots import *
def get_1st(path): return glob(f'{path}/*.*')[0]
dc_path = "data/dogscats/valid/"
list_paths = [get_1st(f"{dc_path}cats"), get_1st(f"{dc_path}dogs")]
plots_from_files(list_paths, titles=["cat", "dog"], maintitle="Single-label classification")
###Output
_____no_output_____
###Markdown
In single-label classification each sample belongs to one class. In the previous example, each image is either a *dog* or a *cat*.
###Code
list_paths = [f"{PATH}train-jpg/train_0.jpg", f"{PATH}train-jpg/train_1.jpg"]
titles=["haze primary", "agriculture clear primary water"]
plots_from_files(list_paths, titles=titles, maintitle="Multi-label classification")
###Output
_____no_output_____
###Markdown
In multi-label classification each sample can belong to one or more classes. In the previous example, the first images belongs to two classes: *haze* and *primary*. The second image belongs to four classes: *agriculture*, *clear*, *primary* and *water*. Multi-label models for Planet dataset
###Code
from planet import f2
metrics=[f2]
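# Illustrative sketch (not the implementation in the local `planet` module): an F2 score is the
# F-beta metric with beta=2, i.e. recall is weighted more heavily than precision. The 0.2
# threshold below is an assumption used only for this sketch.
from sklearn.metrics import fbeta_score
def f2_sketch(preds, targs, threshold=0.2):
    return fbeta_score(targs, preds > threshold, beta=2, average='samples')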
f_model = resnet34
label_csv = f'{PATH}train_v2.csv'
n = len(list(open(label_csv)))-1
val_idxs = get_cv_idxs(n)
###Output
_____no_output_____
###Markdown
We use a different set of data augmentations for this dataset - we also allow vertical flips, since we don't expect vertical orientation of satellite images to change our classifications.
###Code
def get_data(sz):
tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
return ImageClassifierData.from_csv(PATH, 'train-jpg', label_csv, tfms=tfms,
suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')
data = get_data(256)
x,y = next(iter(data.val_dl))
y
list(zip(data.classes, y[0]))
plt.imshow(data.val_ds.denorm(to_np(x))[0]*1.4);
sz=64
data = get_data(sz)
data = data.resize(int(sz*1.3), 'tmp')
?data.resize
learn = ConvLearner.pretrained(f_model, data, metrics=metrics)
lrf=learn.lr_find()
learn.sched.plot()
lr = 0.2
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
lrs = np.array([lr/9,lr/3,lr])
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
learn.sched.plot_loss()
sz=128
learn.set_data(get_data(sz))
learn.freeze()
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
sz=256
learn.set_data(get_data(sz))
learn.freeze()
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
multi_preds, y = learn.TTA()
preds = np.mean(multi_preds, 0)
f2(preds,y)
###Output
_____no_output_____
###Markdown
End
###Code
###Output
_____no_output_____ |