path | concatenated_notebook
---|---
downloaded_kernels/house_sales/kernel_78.ipynb
|
###Markdown
Table of contents1. [Introduction](#introduction) 1. [Imports](#imports) 2. [Loading data](#load_data)2. [Exploratory Data Analysis](#eda) 1. [Data info](#data_info) 2. [Price distribution](#price_distr) 3. [Feature vs price plots](#feat_vs_price) 4. [Correlation matrix](#corr_mat)3. [Data preparation](#data_prep) 1. ['33 bedrooms' case](#33bedrm) 2. [Outliers handling](#outliers) 3. [Visualisations of data without outliers](#expl2) 4. [Picking features and creating datasets](#datasets) 5. [Data splitting into train and test samples](#split)4. [Machine learning models](#ml_intro) 1. [Linear regression](#lr) 2. [KNeighbors](#knn) 3. [RandomForest regression](#rf)5. [Results overview](#results) 1. [R$^{2}$ scores combined](#r_comb) 2. [R$^{2}$ vs dataset for each model](#r_vs_data)6. [Conclusions](#concl) Introduction Data source and Column Metadata:https://www.kaggle.com/harlfoxem/housesalesprediction/data The purpose of this analysis was to practice scikit-learn and Pandas. Imports
###Code
from __future__ import division
import pandas as pd
import sklearn
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import eli5
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
%matplotlib inline
plt.rcParams['figure.figsize'] = 12, 8 # universal plot size
pd.options.mode.chained_assignment = None  # default='warn', disables pandas warnings about chained assignments
njobs = 2 # number of jobs
sbr_c = "#1156bf" # seaborn plot color
###Output
_____no_output_____
###Markdown
Loading Data
###Code
data = pd.read_csv('../input/kc_house_data.csv', iterator=False, parse_dates=['date'])
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis Basic data overview
###Code
data.head(10) # to see the columns and first 10 rows
data.info() # overview of the data
###Output
_____no_output_____
###Markdown
Distribution of the pricing year. If the data showed much greater variance across years, I would consider removing the older records; a sketch of such a filter follows the plot below.
###Code
data['date'].dt.year.hist()
plt.title('Year of pricing distribution')
plt.show()
data.describe() # overview of the data
###Output
_____no_output_____
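###Markdown
As an aside, if the pricing years showed much more spread, this is the kind of filter I would consider. The cutoff year below is a hypothetical example and the filter is not applied to `data` in this notebook.
###Code
# Hypothetical sketch only: keep records priced from an assumed cutoff year onward.
cutoff_year = 2015
recent = data[data['date'].dt.year >= cutoff_year]
recent['date'].dt.year.value_counts()
###Output
_____no_output_____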
###Markdown
Price distribution
###Code
data['price'].hist(xrot=30, bins=500)
plt.title('Price distribution')
plt.show()
###Output
_____no_output_____
###Markdown
Feature vs price plots
###Code
fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6)) = plt.subplots(3, 2, figsize = (12, 15))
sns.stripplot(x = "grade", y = "price", data = data, jitter=True, ax = ax1, color=sbr_c)
sns.stripplot(x = "view", y = "price", data = data, jitter=True, ax = ax2, color=sbr_c)
sns.stripplot(x = "bedrooms", y = "price", data = data, jitter=True, ax = ax3, color=sbr_c)
sns.stripplot(x = "bathrooms", y = "price", data = data, jitter=True, ax = ax4, color=sbr_c)
sns.stripplot(x = "condition", y = "price", data = data, jitter=True, ax = ax5, color=sbr_c)
sns.stripplot(x = "floors", y = "price", data = data, jitter=True, ax = ax6, color=sbr_c)
ax4.set_xticklabels(ax4.get_xticklabels(), rotation=60)
for i in range(1,7):
a = eval('ax'+str(i))
a.set_yscale('log')
plt.tight_layout()
fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6)) = plt.subplots(3, 2, figsize = (12, 12))
sns.regplot(x = 'sqft_living', y = 'price', data = data, ax = ax1, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'sqft_lot', y = 'price', data = data, ax = ax2, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'yr_built', y = 'price', data = data, ax = ax5, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'sqft_basement', y = 'price', data = data, ax = ax6, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'lat', y = 'price', data = data, ax = ax3, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'long', y = 'price', data = data, ax = ax4, fit_reg=False, scatter_kws={"s": 1})
ax6.set_xlim([-100, max(data['sqft_basement'])]) # 6th plot has broken xscale
for i in range(1,7):
a = eval('ax'+str(i))
a.set_yscale('log')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Correlation matrix
###Code
corrmat = data.corr() # correlations between features
f, ax = plt.subplots(figsize=(16,16))
sns.heatmap(corrmat, square = True, cmap = 'RdBu_r', vmin = -1, vmax = 1, annot=True, fmt='.2f', ax = ax)
###Output
_____no_output_____
###Markdown
Data preparation "33 bedrooms" case A look at data.describe() reveals a house with 33 bedrooms, which seems strange compared to the others. In the next few lines I will examine this case. My guess is that it's a typo and it should be "3" instead of "33".
###Code
# selecting house with 33 bedrooms
myCase = data[data['bedrooms']==33]
myCase
# data without '33 bedrooms' house
theOthers = data[data['bedrooms']!=33]
theOtherStats = theOthers.describe()
theOtherStats
newDf = theOthers[['bedrooms', 'bathrooms', 'sqft_living']]
newDf = newDf[(newDf['bedrooms'] > 0) & (newDf['bathrooms'] > 0)]
newDf['bathrooms/bedrooms'] = newDf['bathrooms']/newDf['bedrooms']
newDf['sqft_living/bedrooms'] = newDf['sqft_living']/newDf['bedrooms']
newDf['bathrooms/bedrooms'].hist(bins=20)
plt.title('bathrooms/bedrooms ratio distribution')
plt.show()
newDf['sqft_living/bedrooms'].hist(bins=20)
plt.title('sqft_living/bedrooms ratio distribution')
plt.show()
###Output
_____no_output_____
###Markdown
Bathrooms/Bedrooms ratio
###Code
# values for other properties
othersMeanBB = np.mean(newDf['bathrooms/bedrooms']) # mean bathroom/bedroom ratio
othersStdBB = np.std(newDf['bathrooms/bedrooms']) # std of bathroom/bedroom ratio
# values for suspicious house: myCase - real data; myCase2 - if there would be 3 bedrooms
myCaseBB = float(myCase['bathrooms'])/float(myCase['bedrooms'])
myCase2BB = float(myCase['bathrooms'])/3. # if there would be 3 bedrooms
print ('{:10}: {:6.3f} bathroom per bedroom'.format('"33" case', myCaseBB))
print ('{:10}: {:6.3f} bathroom per bedroom'.format('"3" case', myCase2BB))
print ('{:10}: {:6.3f} (std: {:.3f}) bathroom per bedroom'.format('The others', othersMeanBB, othersStdBB))
###Output
_____no_output_____
###Markdown
sqft_living/Bedrooms ratio
###Code
# values for other properties
othersMeanSB = np.mean(newDf['sqft_living/bedrooms']) # mean sqft_living/bedroom ratio
othersStdSB = np.std(newDf['sqft_living/bedrooms']) # std of sqft_living/bedroom ratio
# values for suspicious house: myCase - real data; myCase2 - if there would be 3 bedrooms
myCaseSB = float(myCase['sqft_living'])/float(myCase['bedrooms'])
myCase2SB = float(myCase['sqft_living'])/3. # if there would be 3 bedrooms
print ('{:10}: {:6.0f} sqft per bedroom'.format('"33" case', myCaseSB))
print ('{:10}: {:6.0f} sqft per bedroom'.format('"3" case', myCase2SB))
print ('{:10}: {:6.0f} (std: {:.0f}) sqft per bedroom'.format('The others', othersMeanSB, othersStdSB))
###Output
_____no_output_____
###Markdown
Conclusion: the "house with 33 bedrooms" doesn't look realistic. It will be discarded from the dataset (an alternative correction is sketched after the next cell).
###Code
toDropIndex = myCase.index
data.drop(toDropIndex, inplace=True)
stats = data.describe()
stats
###Output
_____no_output_____
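###Markdown
An alternative to dropping the row, sketched here for illustration only, would be to treat it as the suspected typo and set bedrooms to 3. The correction below is applied to a copy of the stored row rather than back to the dataset.
###Code
# Hypothetical alternative (not applied to `data`): correct the suspected typo instead of dropping the row.
corrected_case = myCase.copy()
corrected_case['bedrooms'] = 3
corrected_case[['bedrooms', 'bathrooms', 'sqft_living']]
###Output
_____no_output_____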
###Markdown
Outliers handling The figures show that there are some outliers in the data. data2 is a second dataset with outliers excluded by an arbitrary rule: it keeps only the rows whose price does not differ from the mean price by more than 3 standard deviations.
###Code
data2 = data[np.abs(data['price'] - stats['price']['mean']) <= (3*stats['price']['std'])] # cutting 'price'
###Output
_____no_output_____
###Markdown
Visualisations of data without outliers
###Code
data2.describe()
sns.regplot(x = "sqft_living", y = "price", data = data2, fit_reg=False, scatter_kws={"s": 2})
fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6)) = plt.subplots(3, 2, figsize = (12, 15))
sns.stripplot(x = "grade", y = "price", data = data2, jitter=True, ax = ax1, color=sbr_c)
sns.stripplot(x = "view", y = "price", data = data2, jitter=True, ax = ax2, color=sbr_c)
sns.stripplot(x = "bedrooms", y = "price", data = data2, jitter=True, ax = ax3, color=sbr_c)
sns.stripplot(x = "bathrooms", y = "price", data = data2, jitter=True, ax = ax4, color=sbr_c)
sns.stripplot(x = "condition", y = "price", data = data2, jitter=True, ax = ax5, color=sbr_c)
sns.stripplot(x = "floors", y = "price", data = data2, jitter=True, ax = ax6, color=sbr_c)
ax4.set_xticklabels(ax4.get_xticklabels(), rotation=45)
for i in range(1,7):
a = eval('ax'+str(i))
a.set_yscale('log')
plt.tight_layout()
fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6)) = plt.subplots(3, 2, figsize = (12, 12))
sns.regplot(x = 'sqft_living', y = 'price', data = data2, ax = ax1, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'sqft_lot', y = 'price', data = data2, ax = ax2, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'yr_built', y = 'price', data = data2, ax = ax5, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'sqft_basement', y = 'price', data = data2, ax = ax6, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'lat', y = 'price', data = data2, ax = ax3, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'long', y = 'price', data = data2, ax = ax4, fit_reg=False, scatter_kws={"s": 1})
ax6.set_xlim([-100, max(data2['sqft_basement'])]) # 6th plot has broken xscale
for i in range(1,7):
a = eval('ax'+str(i))
a.set_yscale('log')
plt.tight_layout()
fig, ((ax1, ax2)) = plt.subplots(1, 2, figsize = (12, 6))
sns.regplot(x = 'sqft_basement', y = 'sqft_living', data = data2, ax = ax1, fit_reg=False, scatter_kws={"s": 1})
sns.regplot(x = 'sqft_above', y = 'sqft_living', data = data2, ax = ax2, fit_reg=False, scatter_kws={"s": 1})
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Picking features and creating datasets First we should pick the features that we will put into the model. The correlation matrix presented above can help with this decision. From the listed features I would use:* basement* bathrooms* bedrooms* grade* sqft_living* sqft_lot* waterfront* view. 'sqft_basement' and 'sqft_above' seem to be strongly related to 'sqft_living', so taking only 'sqft_living' into account should work. For 'sqft_basement' I will convert the integer area value into a binary (0, 1) flag indicating whether the property has a basement.
###Code
data['basement'] = data['sqft_basement'].apply(lambda x: 1 if x > 0 else 0)
data2['basement'] = data2['sqft_basement'].apply(lambda x: 1 if x > 0 else 0)
data2.head(10)
# removing unnecessary features
dataRaw = data.copy(deep=True)
dataRaw.drop(['date', 'id'], axis = 1, inplace=True)
dataSel1 = data[['price', 'basement', 'bathrooms', 'bedrooms', 'grade', 'sqft_living', 'sqft_lot', 'waterfront', 'view']]
dataSel2 = data2[['price', 'basement', 'bathrooms', 'bedrooms', 'grade', 'sqft_living', 'sqft_lot', 'waterfront', 'view']]
###Output
_____no_output_____
###Markdown
Data splitting into train and test samples
###Code
# random_state=seed fixes RNG seed. 80% of data will be used for training, 20% for testing.
seed = 2
splitRatio = 0.2
# data with outliers, only columns selected manually
train, test = train_test_split(dataSel1, test_size=splitRatio, random_state=seed)
Y_trn1 = train['price'].tolist()
X_trn1 = train.drop(['price'], axis=1)
Y_tst1 = test['price'].tolist()
X_tst1 = test.drop(['price'], axis=1)
# data without outliers, only columns selected manually
train2, test2 = train_test_split(dataSel2, test_size=splitRatio, random_state=seed)
Y_trn2 = train2['price'].tolist()
X_trn2 = train2.drop(['price'], axis=1)
Y_tst2 = test2['price'].tolist()
X_tst2 = test2.drop(['price'], axis=1)
# data with outliers and all meaningful columns (date and id excluded)
trainR, testR = train_test_split(dataRaw, test_size=splitRatio, random_state=seed)
Y_trnR = trainR['price'].tolist()
X_trnR = trainR.drop(['price'], axis=1)
Y_tstR = testR['price'].tolist()
X_tstR = testR.drop(['price'], axis=1)
X_trnR.head()
###Output
_____no_output_____
###Markdown
Machine learning models Linear regression
###Code
modelLRR = LinearRegression(n_jobs=njobs)
modelLR1 = LinearRegression(n_jobs=njobs)
modelLR2 = LinearRegression(n_jobs=njobs)
modelLRR.fit(X_trnR, Y_trnR)
modelLR1.fit(X_trn1, Y_trn1)
modelLR2.fit(X_trn2, Y_trn2)
scoreR = modelLRR.score(X_tstR, Y_tstR)
score1 = modelLR1.score(X_tst1, Y_tst1)
score2 = modelLR2.score(X_tst2, Y_tst2)
print ("R^2 score: {:8.4f} for {}".format(scoreR, 'Raw data'))
print ("R^2 score: {:8.4f} for {}".format(score1, 'Dataset 1 (with outliers)'))
print ("R^2 score: {:8.4f} for {}".format(score2, 'Dataset 2 (without outliers)'))
lrDict = {'Dataset': ['Raw data', 'Dataset 1', 'Dataset 2'],
'R^2 score': [scoreR, score1, score2],
'Best params': [None, None, None]}
pd.DataFrame(lrDict)
lr = LinearRegression(n_jobs=njobs, normalize=True)
lr.fit(X_trnR, Y_trnR)
###Output
_____no_output_____
###Markdown
Extracting weights of features (not normalized)
###Code
weights = eli5.explain_weights_df(lr) # weights of LinearRegression model for RawData
rank = [int(i[1:]) for i in weights['feature'].values[1:]]
labels = ['BIAS'] + [X_trnR.columns[i] for i in rank]
weights['feature'] = labels
weights
###Output
_____no_output_____
###Markdown
KNeighbors KNeighborsRegressor requires more hyperparameters than LinearRegression, so using GridSearchCV to tune them seems to be a good idea.
###Code
tuned_parameters = {'n_neighbors': range(1,21), 'weights': ['uniform', 'distance']}
knR = GridSearchCV(KNeighborsRegressor(), tuned_parameters, n_jobs=njobs)
kn1 = GridSearchCV(KNeighborsRegressor(), tuned_parameters, n_jobs=njobs)
kn2 = GridSearchCV(KNeighborsRegressor(), tuned_parameters, n_jobs=njobs)
knR.fit(X_trnR, Y_trnR)
kn1.fit(X_trn1, Y_trn1)
kn2.fit(X_trn2, Y_trn2)
scoreR = knR.score(X_tstR, Y_tstR)
score1 = kn1.score(X_tst1, Y_tst1)
score2 = kn2.score(X_tst2, Y_tst2)
parR = knR.best_params_
par1 = kn1.best_params_
par2 = kn2.best_params_
print ("R^2: {:6.4f} {:12} | Params: {}".format(scoreR, 'Raw data', parR))
print ("R^2: {:6.4f} {:12} | Params: {}".format(score1, 'Dataset 1', par1))
print ("R^2: {:6.4f} {:12} | Params: {}".format(score2, 'Dataset 2', par2))
knDict = {'Dataset': ['Raw data', 'Dataset 1', 'Dataset 2'],
'R^2 score': [scoreR, score1, score2],
'Best params': [parR, par1, par2]}
pd.DataFrame(knDict)
###Output
_____no_output_____
###Markdown
RandomForest regression As in the previous case, using GridSearchCV will help with tuning the hyperparameters.
###Code
tuned_parameters = {'n_estimators': [10,20,50,100], 'max_depth': [10,20,50]}
rfR = GridSearchCV(RandomForestRegressor(), tuned_parameters, n_jobs=njobs)
rf1 = GridSearchCV(RandomForestRegressor(), tuned_parameters, n_jobs=njobs)
rf2 = GridSearchCV(RandomForestRegressor(), tuned_parameters, n_jobs=njobs)
rfR.fit(X_trnR, Y_trnR)
rf1.fit(X_trn1, Y_trn1)
rf2.fit(X_trn2, Y_trn2)
scoreR = rfR.score(X_tstR, Y_tstR)
score1 = rf1.score(X_tst1, Y_tst1)
score2 = rf2.score(X_tst2, Y_tst2)
parR = rfR.best_params_
par1 = rf1.best_params_
par2 = rf2.best_params_
print ("R^2: {:6.4f} {:12} | Params: {}".format(scoreR, 'Raw data', parR))
print ("R^2: {:6.4f} {:12} | Params: {}".format(score1, 'Dataset 1', par1))
print ("R^2: {:6.4f} {:12} | Params: {}".format(score2, 'Dataset 2', par2))
rfDict = {'Dataset': ['Raw data', 'Dataset 1', 'Dataset 2'],
'R^2 score': [scoreR, score1, score2],
'Best params': [parR, par1, par2]}
pd.DataFrame(rfDict)
###Output
_____no_output_____
###Markdown
Checking feature importances in Random Forest Regressor model.
###Code
rf = RandomForestRegressor(n_estimators=100, max_depth=50, n_jobs=njobs)
rf.fit(X_trnR, Y_trnR)
importances = rf.feature_importances_
# calculating std by collecting 'feature_importances_' from every tree in forest
rfStd = np.std([tree.feature_importances_ for tree in rf.estimators_], axis=0)
indices = np.argsort(importances)[::-1] # descending order
xlabels = [X_trnR.columns[i] for i in indices]
plt.title("Random Forest: Mean feature importances with STD")
plt.bar(range(len(xlabels)), importances[indices],
color="#1156bf", yerr=rfStd[indices], align="center", capsize=8)
plt.xticks(rotation=45)
plt.xticks(range(len(xlabels)), xlabels)
plt.xlim([-1, len(xlabels)])
plt.show()
# feature importance for RandomForest with the best params tuned by GridSearchCV, calculated by eli5
weights = eli5.explain_weights_df(rf)
rank = [int(i[1:]) for i in weights['feature'].values]
labels = [X_trnR.columns[i] for i in rank]
weights['feature'] = labels
weights
###Output
_____no_output_____
###Markdown
Results overview lr: LinearRegression kn: KNeighborsRegressor rf: RandomForestRegressor
###Code
resDict = {'lr' : lrDict, 'kn' : knDict, 'rf' : rfDict}
dict_of_df = {k: pd.DataFrame(v) for k,v in resDict.items()}
resDf = pd.concat(dict_of_df, axis=0)
resDf
###Output
_____no_output_____
###Markdown
R$^{2}$ scores combined
###Code
toPlot = resDf.sort_values(by=['R^2 score'], ascending=False)
fig, axes = plt.subplots(ncols=1, figsize=(12, 8))
toPlot['R^2 score'].plot(ax=axes, kind='bar', title='R$^{2}$ score', color="#1153ff")
plt.ylabel('R$^{2}$', fontsize=20)
plt.xlabel('Model & Dataset', fontsize=20)
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
R$^{2}$ vs dataset for each model
###Code
toPlot = resDf.sort_values(by=['R^2 score'], ascending=False)
fig, axes = plt.subplots(ncols=1, figsize=(12, 8))
toPlot.loc['lr']['R^2 score'].plot(ax=axes, kind='bar', title='R$^{2}$ score for Linear Regression', color="#1153ff")
plt.ylabel('R$^{2}$', fontsize=20)
plt.xlabel('Dataset', fontsize=20)
plt.xticks(rotation=45)
plt.xticks(range(3), [toPlot.loc['lr']['Dataset'][i] for i in range(3)])
plt.show()
toPlot = resDf.sort_values(by=['R^2 score'], ascending=False)
fig, axes = plt.subplots(ncols=1, figsize=(12, 8))
toPlot.loc['kn']['R^2 score'].plot(ax=axes, kind='bar', title='R$^{2}$ score for KNeighbors', color="#1153ff")
plt.ylabel('R$^{2}$', fontsize=20)
plt.xlabel('Dataset', fontsize=20)
plt.xticks(rotation=45)
plt.xticks(range(3), [toPlot.loc['kn']['Dataset'][i] for i in range(3)])
plt.show()
toPlot = resDf.sort_values(by=['R^2 score'], ascending=False)
fig, axes = plt.subplots(ncols=1, figsize=(12, 8))
toPlot.loc['rf']['R^2 score'].plot(ax=axes, kind='bar', title='R$^{2}$ score for Random Forest', color="#1153ff")
plt.ylabel('R$^{2}$', fontsize=20)
plt.xlabel('Dataset', fontsize=20)
plt.xticks(rotation=45)
plt.xticks(range(3), [toPlot.loc['rf']['Dataset'][i] for i in range(3)])
plt.show()
###Output
_____no_output_____
|
notebooks/CPG/04_Operations_Layer.ipynb
|
###Markdown
Getting Started ML Ops is gaining a lot of popularity. This example showcases a key piece you can use to construct your automation pipeline. As we can see in the following architecture diagram, you will be deploying an AWS Step Functions workflow containing AWS Lambda functions that call Amazon S3, Amazon Personalize, and Amazon SNS APIs. This package contains the source code of a Step Functions pipeline that is able to perform multiple actions within **Amazon Personalize**, including the following:- Dataset Group creation- Datasets creation and import- Solution creation- Solution version creation- Campaign creation. Once the steps are completed, the step function notifies the user of its completion through the use of an SNS topic. The below diagram describes the architecture of the solution: The below diagram showcases the Step Functions workflow definition: Prerequisites Installing AWS SAM The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML. During deployment, SAM transforms and expands the SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.**Install** the [AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html). This will install the necessary tools to build, deploy, and locally test your project. In this particular example we will be using AWS SAM to build and deploy only. For additional information please visit our [documentation](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html).**Note:** We have pre-installed the SAM CLI in this notebook through a CloudFormation lifecycle policy config. Let's check what version of SAM we have installed
###Code
!sam --version
###Output
_____no_output_____
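###Markdown
As an aside, if you run this outside the workshop environment, where the SAM CLI is not pre-installed, one common install path is pip. It is shown commented out because it is not needed here.
###Code
# Not required in this workshop environment (SAM CLI is pre-installed); shown only for reference.
# !pip install aws-sam-cli
###Output
_____no_output_____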
###Markdown
Directory Structure Let's take a look at the directory structure. We have a couple of artifacts that we will be using to build our MLOps pipeline.
###Code
!ls /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops
###Output
_____no_output_____
###Markdown
**`ml_ops/domain`*** This directory contains the configuration file and sample data based on the domain. In this example we are going to be using the CPG domain
###Code
!ls /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops/domain
###Output
_____no_output_____
###Markdown
**`ml_ops/lambdas`*** This directory contains all the code that will go into the Lambda functions; each of these Lambda functions becomes a step inside the Step Functions state machine we will deploy
###Code
!ls /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops/lambdas
###Output
_____no_output_____
###Markdown
**`ml_ops/template.yaml`*** This is our SAM template that will deploy the automation into our account; here we are printing just the head
###Code
!head /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops/template.yaml
###Output
_____no_output_____
###Markdown
Deploying In order to deploy the project you will need to run the following commands:
###Code
!cd /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops/; sam build
!cd /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops/; sam deploy --template-file template.yaml --stack-name notebook-automation --capabilities CAPABILITY_IAM --s3-bucket $(aws cloudformation describe-stack-resources --stack-name AmazonPersonalizeImmersionDay --logical-resource-id SAMArtifactsBucket --query "StackResources[0].PhysicalResourceId" --output text)
###Output
_____no_output_____
###Markdown
Uploading data Let's get the bucket that our CloudFormation stack deployed. We will upload our data to this bucket, plus the configuration file that triggers the automation
###Code
bucket = !aws cloudformation describe-stacks --stack-name notebook-automation --query "Stacks[0].Outputs[?OutputKey=='InputBucketName'].OutputValue" --output text
bucket_name = bucket[0]
print(bucket_name)
###Output
_____no_output_____
###Markdown
Now that we have the bucket name, let's copy over our CPG data so we can explore it and upload it to S3
###Code
!cp -R /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops/domain/CPG ./example
# Import Dependencies
import boto3
import json
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import time
import requests
import csv
import sys
import botocore
import uuid
from collections import defaultdict
import random
import numpy as np
from packaging import version
from botocore.exceptions import ClientError
from pathlib import Path
%matplotlib inline
# Setup Clients
personalize = boto3.client('personalize')
personalize_runtime = boto3.client('personalize-runtime')
personalize_events = boto3.client('personalize-events')
# We will upload our training data in these files:
raw_items_filename = "example/data/Items/items.csv" # Do Not Change
raw_users_filename = "example/data/Users/users.csv" # Do Not Change
raw_interactions_filename = "example/data/Interactions/interactions.csv" # Do Not Change
items_filename = "items.csv" # Do Not Change
users_filename = "users.csv" # Do Not Change
interactions_filename = "interactions.csv" # Do Not Change
interactions_df = pd.read_csv(raw_interactions_filename)
interactions_df.head()
###Output
_____no_output_____
###Markdown
There are two ways of uploading your datasets to S3:1. Using the boto3 SDK2. Using the CLI. In this example we are going to use the CLI command; a boto3 sketch is shown after the upload below.
###Code
!aws s3 sync ./example/data s3://$bucket_name
###Output
_____no_output_____
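###Markdown
For reference, a minimal sketch of the boto3 route mentioned above. The object keys are assumed to mirror the layout of the `aws s3 sync` call; the sync above has already uploaded the data.
###Code
# Hedged sketch of the boto3 SDK route; key names mirror the directory layout synced above.
s3 = boto3.client('s3')
s3.upload_file(raw_interactions_filename, bucket_name, 'Interactions/interactions.csv')
s3.upload_file(raw_items_filename, bucket_name, 'Items/items.csv')
s3.upload_file(raw_users_filename, bucket_name, 'Users/users.csv')
###Output
_____no_output_____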
###Markdown
Starting the State Machine Execution In order to execute the MLOps pipeline we need to provide a parameters file that will tell our state machine which names and configurations we want in our Amazon Personalize deployment. We have prepared a parameters file (`example/params.json`); let's explore it
###Code
with open('example/params.json') as f:
data = json.load(f)
print(json.dumps(data, indent=4, sort_keys=True, default=str))
###Output
_____no_output_____
###Markdown
This parameters file is set up to run at the beginning of this workshop, so let's modify a couple of fields to make sure we are not overwriting our previous deployment
###Code
# Dataset Groups
data['datasetGroup']['name'] = 'notebook-automation'
# Datasets
data['datasets']['Interactions']['name'] = 'na-interactions-ds'
data['datasets']['Users']['name'] = 'na-users-ds'
data['datasets']['Items']['name'] = 'na-items-ds'
# Solutions
data['solutions']['personalizedRanking']['name'] = 'na-personalizedRankingCampaign'
data['solutions']['sims']['name'] = 'na-simsCampaign'
data['solutions']['userPersonalization']['name'] = 'na-userPersonalizationCampaign'
# Campaigns
data['campaigns']['personalizedRankingCampaign']['name'] = 'na-personalizedRankingCampaign'
data['campaigns']['simsCampaign']['name'] = 'na-simsCampaign'
data['campaigns']['userPersonalizationCampaign']['name'] = 'na-userPersonalizationCampaign'
# Event Tracker
data['eventTracker']['name'] = 'na-eventTracker'
print(json.dumps(data, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
Updating and uploading your parameters file to S3 First let's write the file locally
###Code
with open('example/params.json', 'w') as outfile:
json.dump(data, outfile)
###Output
_____no_output_____
###Markdown
Now we can upload this file to S3, we are going to be using the CLI to do so
###Code
!aws s3 cp ./example/params.json s3://$bucket_name
###Output
_____no_output_____
###Markdown
Validating the deployment So far we have deployed the required automation; let's take a look at the Step Functions execution
###Code
client = boto3.client('stepfunctions')
stateMachineArn = !aws cloudformation describe-stacks --stack-name notebook-automation --query "Stacks[0].Outputs[?OutputKey=='DeployStateMachineArn'].OutputValue" --output text
stateMachineArn= stateMachineArn[0]
describe_response = client.describe_state_machine(
stateMachineArn=stateMachineArn
)
print(json.dumps(describe_response, indent=4, sort_keys=True, default=str))
executions_response = client.list_executions(
stateMachineArn=stateMachineArn,
statusFilter='SUCCEEDED',
maxResults=2
)
print(json.dumps(executions_response, indent=4, sort_keys=True, default=str))
###Output
_____no_output_____
###Markdown
Let's look at the succeeded execution Once your Step Functions executions are done, you can list them and describe them
###Code
executions_response = client.list_executions(
stateMachineArn=stateMachineArn,
statusFilter='SUCCEEDED',
maxResults=2
)
print(json.dumps(executions_response, indent=4, sort_keys=True, default=str))
describe_executions_response = client.describe_execution(
executionArn=executions_response['executions'][0]['executionArn']
)
print(json.dumps(describe_executions_response, indent=4, sort_keys=True, default=str))
###Output
_____no_output_____
###Markdown
Let's look at the input that was delivered to the State Machine As we can see below, this is the input from the parameters file we uploaded to S3. This input JSON was then passed to the Lambda functions in the state machine to be used across the Amazon Personalize APIs
###Code
print(json.dumps(json.loads(describe_executions_response['input']), indent=4, sort_keys=True, default=str))
###Output
_____no_output_____
###Markdown
Let's look at the timestamps As we can see below, these are the start and stop times of the execution, from which we compute the elapsed time
###Code
print("Start Date:")
print(json.dumps(describe_executions_response['startDate'], indent=4, sort_keys=True, default=str))
print("Stop Date:")
print(json.dumps(describe_executions_response['stopDate'], indent=4, sort_keys=True, default=str))
print("Elapsed Time: ")
elapsed_time = describe_executions_response['stopDate'] - describe_executions_response['startDate']
print(elapsed_time)
###Output
_____no_output_____
###Markdown
Getting Started ML Ops is gaining a lot of popularity. This example showcases a key piece you can use to construct your automation pipeline. As we can see in the following architecture diagram, you will be deploying an AWS Step Functions workflow containing AWS Lambda functions that call Amazon S3, Amazon Personalize, and Amazon SNS APIs. This package contains the source code of a Step Functions pipeline that is able to perform multiple actions within **Amazon Personalize**, including the following:- Dataset Group creation- Datasets creation and import- Solution creation- Solution version creation- Campaign creation. Once the steps are completed, the step function notifies the user of its completion through the use of an SNS topic. The below diagram describes the architecture of the solution: The below diagram showcases the Step Functions workflow definition: Uploading data Let's get the bucket that our CloudFormation stack deployed. We will upload our data to this bucket, plus the configuration file that triggers the automation
###Code
bucket = !aws cloudformation describe-stacks --stack-name id-ml-ops --query "Stacks[0].Outputs[?OutputKey=='InputBucketName'].OutputValue" --output text
bucket_name = bucket[0]
print(bucket_name)
###Output
_____no_output_____
###Markdown
Now that we have the bucket name, let's copy over our CPG data so we can explore it and upload it to S3
###Code
!cp -R /home/ec2-user/SageMaker/amazon-personalize-immersion-day/automation/ml_ops/domain/CPG ./example
# Import Dependencies
import boto3
import json
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import time
import requests
import csv
import sys
import botocore
import uuid
from collections import defaultdict
import random
import numpy as np
from packaging import version
from botocore.exceptions import ClientError
from pathlib import Path
%matplotlib inline
# Setup Clients
personalize = boto3.client('personalize')
personalize_runtime = boto3.client('personalize-runtime')
personalize_events = boto3.client('personalize-events')
# We will upload our training data in these files:
raw_items_filename = "example/data/Items/items.csv" # Do Not Change
raw_users_filename = "example/data/Users/users.csv" # Do Not Change
raw_interactions_filename = "example/data/Interactions/interactions.csv" # Do Not Change
items_filename = "items.csv" # Do Not Change
users_filename = "users.csv" # Do Not Change
interactions_filename = "interactions.csv" # Do Not Change
interactions_df = pd.read_csv(raw_interactions_filename)
interactions_df.head()
###Output
_____no_output_____
###Markdown
There are two ways of uploading your datasets to S3:1. Using the boto3 SDK2. Using the CLI. In this example we are going to use the CLI command
###Code
!aws s3 sync ./example/data s3://$bucket_name
###Output
_____no_output_____
###Markdown
Starting the State Machine Execution In order to execute the MLOps pipeline we need to provide a parameters file that will tell our state machine which names and configurations we want in our Amazon Personalize deployment. Let's create a params.json file and define the Amazon Personalize resources we want our MLOps pipeline to deploy
###Code
params = {
"datasetGroup": {
"name": "AP-ML-Ops-1"
},
"datasets": {
"Interactions": {
"name": "InteractionsDataset",
"schema": {
"fields": [
{
"name": "USER_ID",
"type": "string"
},
{
"name": "ITEM_ID",
"type": "string"
},
{
"name": "EVENT_TYPE",
"type": "string"
},
{
"name": "TIMESTAMP",
"type": "long"
}
],
"name": "Interactions",
"namespace": "com.amazonaws.personalize.schema",
"type": "record",
"version": "1.0"
}
},
"Items": {
"name": "ItemsDataset",
"schema": {
"fields": [
{
"name": "ITEM_ID",
"type": "string"
},
{
"categorical": True,
"name": "GENRE",
"type": "string"
},
{
"name": "YEAR",
"type": "int"
}
],
"name": "Items",
"namespace": "com.amazonaws.personalize.schema",
"type": "record",
"version": "1.0"
}
}
},
"solutions": {
"sims": {
"name": "na-simsCampaign-1",
"recipeArn": "arn:aws:personalize:::recipe/aws-sims"
}
},
"campaigns": {
"simsCampaign": {
"minProvisionedTPS": 1,
"name": "na-simsCampaign-1"
}
},
"eventTracker": {
"name": "AutomationImmersionDayEventTracker-1"
}
}
print(json.dumps(params, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
This parameters file will create a dataset group containing a campaign that exposes a solution trained with the SIMS recipe Updating and uploading your parameters file to S3 First let's write the file locally
###Code
with open('example/params.json', 'w') as outfile:
json.dump(params, outfile)
###Output
_____no_output_____
###Markdown
Now we can upload this file to S3, we are going to be using the CLI to do so
###Code
!aws s3 cp ./example/params.json s3://$bucket_name
###Output
_____no_output_____
###Markdown
Validating your MLOps pipeline Let's take a look at the Step Functions execution.
###Code
client = boto3.client('stepfunctions')
stateMachineArn = !aws cloudformation describe-stacks --stack-name id-ml-ops --query "Stacks[0].Outputs[?OutputKey=='DeployStateMachineArn'].OutputValue" --output text
stateMachineArn= stateMachineArn[0]
stateMachineArn
executions_response = client.list_executions(
stateMachineArn=stateMachineArn,
statusFilter='RUNNING',
maxResults=2
)
print(json.dumps(executions_response, indent=4, sort_keys=True, default=str))
###Output
_____no_output_____
###Markdown
This step will take at least 30 minutes to complete. You can check the status of the state machine execution in the console by:1. Navigate to the [Step Functions console](https://console.aws.amazon.com/states/home). 2. Click on the number **1** under the **Running** column3. Click on the **current execution** that is named after the date4. Here you can see which steps are currently executing highlighted in blue. This example step function definition will automatically retry each step by querying the describe service APIs with a backoff rate of 1.5; in each retry a new Lambda function is executed looking for the success or failure of the given step (a hypothetical sketch of such a retry block follows the polling loop below). These step functions will take around 20 minutes to finish executing, which includes importing the datasets, training a SIMS solution, and deploying a campaign. **Note:** we are only training a SIMS model due to time constraints.
###Code
while ( len(client.list_executions(
stateMachineArn=stateMachineArn,
statusFilter='RUNNING',
maxResults=2
)['executions']) > 0):
print ('State Machine is running...')
time.sleep(60)
###Output
_____no_output_____
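###Markdown
For reference, a hypothetical sketch (expressed as a Python dict) of the kind of Amazon States Language Retry block the deployed state machine uses. The 1.5 backoff rate comes from the text above; the error name and attempt counts are illustrative assumptions, not values read from the deployed template.
###Code
# Illustrative only: the error name, interval, and max attempts are assumptions;
# the 1.5 backoff rate is the value described in the text above.
example_retry_block = {
    "Retry": [
        {
            "ErrorEquals": ["ResourcePending"],  # hypothetical error raised while a resource is still creating
            "IntervalSeconds": 30,
            "BackoffRate": 1.5,
            "MaxAttempts": 100,
        }
    ]
}
print(json.dumps(example_retry_block, indent=4))
###Output
_____no_output_____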
###Markdown
Let's look at the succeeded execution Once your Step Functions executions are done, you can list them and describe them
###Code
executions_response = client.list_executions(
stateMachineArn=stateMachineArn,
statusFilter='SUCCEEDED',
maxResults=2
)
print(json.dumps(executions_response, indent=4, sort_keys=True, default=str))
###Output
_____no_output_____
###Markdown
You can validate your Amazon Personalize deployment by navigating to the [Service Console](https://console.aws.amazon.com/personalize/home) and looking for the dataset group called **AP-ML-Ops-1**; a programmatic check is sketched after the next cell. Let's look at the input that was delivered to the State Machine As we can see below, this is the input from the parameters file we uploaded to S3. This input JSON was then passed to the Lambda functions in the state machine to be used across the Amazon Personalize APIs
###Code
describe_executions_response = client.describe_execution(
executionArn=executions_response['executions'][0]['executionArn']
)
print(json.dumps(json.loads(describe_executions_response['input']), indent=4, sort_keys=True, default=str))
###Output
_____no_output_____
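###Markdown
As a programmatic alternative to the console check mentioned above, a quick hedged check (not part of the original workshop flow) lists the dataset groups with the `personalize` client created earlier and looks for the one defined in `params`:
###Code
# Look for the dataset group the parameters file asked the pipeline to create
dataset_groups = personalize.list_dataset_groups()['datasetGroups']
print([dg['name'] for dg in dataset_groups if dg['name'] == params['datasetGroup']['name']])
###Output
_____no_output_____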
###Markdown
Let's look at the timestamps As we can see below, these are the start and stop times of the execution, from which we compute the elapsed time
###Code
print("Start Date:")
print(json.dumps(describe_executions_response['startDate'], indent=4, sort_keys=True, default=str))
print("Stop Date:")
print(json.dumps(describe_executions_response['stopDate'], indent=4, sort_keys=True, default=str))
print("Elapsed Time: ")
elapsed_time = describe_executions_response['stopDate'] - describe_executions_response['startDate']
print(elapsed_time)
###Output
_____no_output_____
|
random.ipynb
|
###Markdown
GENERATING RANDOM NUMBERS:
###Code
import random
secret = random.randint(1,100)
win = False
for i in range(1,5):
guess = int(input("guess a number: "))
if guess == secret:
win = True
break
elif guess > secret:
print("your guess is too high")
else:
print("your guess is too low")
if win == True:
print("you win")
else:
print("you lose")
###Output
guess a number: 30
your guess is too low
guess a number: 5
your guess is too low
guess a number: 90
your guess is too high
guess a number: 50
your guess is too low
you lose
###Markdown
random Functions of the random module `random.betavariate()` is used to obtain a random floating-point number between 0 and 1 based on the beta distribution (applied in statistical calculations).
###Code
import random
random.betavariate(alpha=2, beta=5)
###Output
_____no_output_____
###Markdown
Note: [a plot of the results of calling this function 100k times](https://interactivechaos.com/en/python/function/randombetavariate) `random.gauss()` generates a random floating-point number based on the Gaussian distribution (used in probability theory).
###Code
random.gauss(mu=5,sigma=3)
###Output
_____no_output_____
###Markdown
Note: [a plot of the results of calling this function 100k times](https://interactivechaos.com/en/python/function/randomgauss) `random.paretovariate()` returns a random floating-point number based on the Pareto distribution (used in probability theory).
###Code
random.paretovariate(alpha=10)
###Output
_____no_output_____
###Markdown
###Code
!apt-get update
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
!pip install -U colabgymrender
import gym
from colabgymrender.recorder import Recorder
env = gym.make('MountainCarContinuous-v0')
directory = './video'
env._max_episode_steps = 50000
print(env._max_episode_steps)
env = Recorder(env, directory)
observation = env.reset()
t = 0
terminal = False
while not terminal:
t+=1
action = env.action_space.sample()
observation, reward, terminal, info = env.step(action)
print(t,observation, reward, terminal, info)
print(terminal)
print(t)
env.play(maxduration=50000)
###Output
_____no_output_____
###Markdown
Split data
###Code
import gc
gc.collect()
split = X[0]
emb_train = embeddings[split == 'train']
y_train = y[split == 'train']
emb_val = embeddings[split == 'dev']
y_val = y[split == 'dev']
len(emb_train)
len(emb_val)
unique_classes = np.intersect1d(y_train, y_val)
# process targets
#unique_classes = np.unique(y)
num_classes = len(unique_classes)
print(f"There are {num_classes} unique classes for family_id.")
subset_classes = set(unique_classes)
#Train
x_train_generator = (x for (x, y) in zip(emb_train, y_train) if y in subset_classes)
y_train_generator = (y for (x, y) in zip(emb_train, y_train) if y in subset_classes)
emb_train = list(x_train_generator)
y_train = list(y_train_generator)
len(emb_train), len(y_train)
#Eval
x_val_generator = (x for (x, y) in zip(emb_val, y_val) if y in subset_classes)
y_val_generator = (y for (x, y) in zip(emb_val, y_val) if y in subset_classes)
emb_val = list(x_val_generator)
y_val = list(y_val_generator)
len(y_val)
le = preprocessing.LabelEncoder()
labels = le.fit(unique_classes)
targets_train = le.transform(y_train)
targets_val = le.transform(y_val)
print(f"Targets: {targets_train.shape}, {targets_train}, {len(labels.classes_)} classes")
from torch.utils.data import Dataset, DataLoader
import torch
class MyDataset(Dataset):
def __init__(self, data, targets, transform=None):
self.data = data
self.targets = torch.LongTensor(targets)
self.transform = transform
def __getitem__(self, index):
x = self.data[index]
y = self.targets[index]
return x, y
def __len__(self):
return len(self.data)
train_dataset = MyDataset(data=emb_train, targets=targets_train)
val_dataset = MyDataset(data=emb_val, targets=targets_val)
train_dataloader = DataLoader(train_dataset, batch_size=2048)
test_dataloader = DataLoader(val_dataset, batch_size=2048)
next(iter(train_dataloader))[0].shape
next(iter(train_dataloader))[1].shape
import torch
import torch.nn.functional as F
from torch import nn
from deepchain.models.torch_model import TorchModel
from pytorch_lightning.metrics.functional import accuracy
class FamilyMLP(TorchModel):
"""Multi-layer perceptron model."""
def __init__(self, input_shape: int = 768, output_shape: int = 1, **kwargs):
super().__init__(**kwargs)
self.output = nn.Softmax if output_shape > 1 else nn.Sigmoid
self.loss = F.cross_entropy if output_shape > 1 else F.binary_cross_entropy
self._model = nn.Sequential(
nn.Linear(input_shape, 256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(256, 256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(256, output_shape)
)
def forward(self, x):
"""Defines forward pass"""
if not isinstance(x, torch.Tensor):
x = torch.tensor(x).float()
return self._model(x)
def training_step(self, batch, batch_idx):
"""training_step defined the train loop. It is independent of forward"""
x, y = batch
y_hat = self._model(x)
y = y.long()
#y = torch.unsqueeze(y, 1)
loss = self.loss(y_hat, y)
self.log("train_loss", loss, prog_bar=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self._model(x)
y = y.long()
loss = self.loss(y_hat, y)
preds = torch.max(y_hat, dim=1)[1]
acc = accuracy(preds, y)
# Calling self.log will surface up scalars for you in TensorBoard
self.log('val_loss', loss, prog_bar=True)
self.log('val_acc', acc, prog_bar=True)
return loss
def save_model(self, path: str):
"""Save entire model with torch"""
torch.save(self._model, path)
mlp = FamilyMLP(input_shape=1024, output_shape=num_classes)
X_train.shape[1]
mlp
mlp._model = torch.load("checkpoint/family_model.pt")
mlp.fit(train_dataloader, test_dataloader, epochs=10, auto_lr_find=True, auto_scale_batch_size=True, gpus=1)
mlp.save_model("family_model.pt")
!pwd
torch.max(mlp(next(iter(train_dataloader))[0]), 1)[1].shape
x, y = next(iter(train_dataloader))
torch.max(mlp(x), 1)[1] == y
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from torch.utils.data import DataLoader, TensorDataset
from typing import Callable, List, Tuple, Union
def model_evaluation_accuracy(
dataloader: DataLoader, model
) -> Tuple[np.array, np.array]:
"""
Make prediction for test data
Args:
dataloader: a torch dataloader containing dataset to be evaluated
model : a callable trained model with a predict method
"""
prediction, truth = [], []
for X, y in dataloader:
y_hat = torch.max(model.predict(X), 1)[1]
prediction += y_hat
truth += y.detach().numpy().flatten().tolist()
prediction, truth = np.array(prediction), np.array(truth)
acc_score = accuracy_score(truth, prediction)
print(f" Test : accuracy score : {acc_score:0.2f}")
return prediction, truth
prediction, truth = model_evaluation_accuracy(train_dataloader, mlp)
prediction, truth = model_evaluation_accuracy(test_dataloader, mlp)
###Output
Test : accuracy score : 0.81
###Markdown
Inference
###Code
le
import joblib
joblib.dump(le, 'label_encoder.joblib')
label_encoder = joblib.load('label_encoder.joblib')
label_encoder
def compute_scores(sequences: List[str]):
"""Return a list of all proteins score"""
#x_embedding = self.transformer.compute_embeddings(sequences)["mean"]
x_embedding = embeddings[:len(sequences)]
y_hat = mlp(torch.tensor(x_embedding))
preds = torch.max(y_hat, dim=1)[1]
preds = preds.detach().cpu().numpy()
family_preds = label_encoder.inverse_transform(preds)
family_list = [{"family_id": family_pred} for family_pred in family_preds]
return family_list
sequences = [
"MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG",
"KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE",
]
compute_scores(sequences)
# Start tensorboard.
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
###Output
_____no_output_____
###Markdown
Confirm that using `Random(None)` is equivalent to directly using `random`
###Code
import random
r = random.Random()
r.random()
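# A small hedged check (an addition, not part of the original scratch cell): in CPython the
# module-level functions are bound methods of a hidden random.Random instance, and seeding
# the module and an explicit Random instance with the same value gives the same first draw.
print(isinstance(random.random.__self__, random.Random))  # True in CPython
r_seeded = random.Random(42)
random.seed(42)
print(r_seeded.random() == random.random())  # identical first draw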
from copy import deepcopy
r2 = deepcopy(random.Random(3))
text_spec = pipeline.get_text_spec(seed=SEED)
txt1 = pipeline.generate_text(messages, text_spec)
state2 = pipeline.get_state()
txt2 = pipeline.generate_text(messages, text_spec)
...
txtN = pipeline.generate_text(messages, text_spec)
# to re-generate text 2
with pipeline.temporary_state(state) as new_pipeline:
new_pipeline.generate_text(messages)
from contextlib import contextmanager
import re
from functools import partial, update_wrapper
class A:
def foo(self, a, b):
"""give me some stuff ;)
:param a: one number
:param b: another number
:return: third number
"""
return a + b
@contextmanager
def tmp(self, b):
self._foo = self.foo
new_foo = partial(self.foo, b=b)
# update_wrapper(new_foo, self.foo)
new_foo.__doc__ = re.sub(r"\:param b.*\n", "", self.foo.__doc__)
self.foo = new_foo
yield self
self.foo = self._foo
x = A()
print(x.foo(3, 3))
with x.tmp(3):
print(x.foo.__doc__)
# help(x.foo)
print(x.foo(5))
print(x.foo.__doc__)
# help(x.foo)
print(x.foo(3, 3))
from collections import defaultdict
partial?
x = defaultdict(int)
y = defaultdict(int, x)
y
x['a'] += 4
y
x
z = defaultdict(int, None)
from typing import get_type_hints
get_type_hints(defaultdict)
###Output
_____no_output_____
|
TestingJupyter.ipynb
|
###Markdown
The following code tries to set a string variable `a`, which can be set from Excel with a value via `set_variable_a`
###Code
global a
a= "In Jupyter"
def set_variable_a(s) :
global a
temp =a
a =s
return temp
a
@ribbon_function('get_not_so_random_number_with_max', 'Display Result', max_value='Ask Integer')
def get_not_so_random_number_with_max2(max_value):
import random
return random.random() * max_value
print ( get_not_so_random_number_with_max2(100))
print( get_ribbon_functions () )
print( get_ribbon_functions () )
print(sum(1,2,3))
c=returnArray(1,2,3)
c
###Output
_____no_output_____
|
courses/machine_learning/deepdive2/structured/solutions/4a_sample_babyweight.ipynb
|
###Markdown
LAB 4a: Creating a Sampled Dataset. Learning Objectives 1. Set up the environment. 2. Sample the natality dataset to create train/eval/test sets. 3. Preprocess the data in a Pandas dataframe. Introduction In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample. We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
pip freeze | grep google-cloud-bigquery==1.6.1 || \
pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
from google.cloud import bigquery
import pandas as pd
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
###Output
_____no_output_____
###Markdown
Create ML datasets by sampling using BigQueryWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
###Code
bq = bigquery.Client(project = PROJECT)
###Output
_____no_output_____
###Markdown
We need to figure out the right way to divide our hash values to get our desired splits. To do that we define the modulo divisor and the number of buckets assigned to each split (a small worked example of the bucket mapping follows the next cell). Feel free to play around with these values to get the perfect combination.
###Code
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
###Output
_____no_output_____
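###Markdown
To make the splitting concrete, here is a small illustrative check of how a hash value maps to a bucket index and then to a split. The hash value below is made up, not taken from the dataset.
###Code
# Hypothetical hash value, for illustration only; mirrors ABS(MOD(hash_values, modulo_divisor))
example_hash = -6392670980804217533
bucket_index = abs(example_hash) % modulo_divisor
if bucket_index < train_buckets:
    split = "train"
elif bucket_index < train_buckets + eval_buckets:
    split = "eval"
else:
    split = "test"
print(bucket_index, split)
###Output
_____no_output_____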
###Markdown
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
###Code
def display_dataframe_head_from_query(query, count=10):
"""Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
Dataframe head with count number of results.
"""
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
###Output
_____no_output_____
###Markdown
For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
###Code
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
###Output
_____no_output_____
###Markdown
Using `COALESCE` would provide the same result as the nested `CASE WHEN`. This is preferable when all we want is the first non-null value. To be precise, the `CASE WHEN` above would become `COALESCE(day, wday, 0) AS date` (a sketch follows the next query). You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions). The next query will combine our hash columns and will leave us just with our label, features, and our hash values.
###Code
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
###Output
_____no_output_____
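###Markdown
As an aside, here is the `COALESCE` form of the date expression mentioned above. This is a sketch of the alternative only; the rest of `hash_cols_fixed_query` would stay the same and it is not used in the queries below.
###Code
# Alternative date expression using COALESCE instead of the nested CASE WHEN; reference only.
coalesce_date_expression = """
    COALESCE(day, wday, 0) AS date,
"""
print(coalesce_date_expression)
###Output
_____no_output_____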
###Markdown
The next query is going to find the counts of each of the unique 657484 `hash_values`. This will be our first step at making actual hash buckets for our split via the `GROUP BY`.
###Code
# Get the counts of each of the unique hashs of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
###Output
_____no_output_____
###Markdown
The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
###Code
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
###Output
_____no_output_____
###Markdown
The number of records is hard for us to easily understand the split, so we will normalize the count into percentage of the data in each of the hash buckets in the next query.
###Code
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
###Output
_____no_output_____
###Markdown
We'll now select the range of buckets to be used in training.
###Code
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
###Output
_____no_output_____
###Markdown
We'll do the same by selecting the range of buckets to be used for evaluation.
###Code
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll select the hash buckets to be used for the test split.
###Code
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
###Output
_____no_output_____
###Markdown
In the below query, we'll `UNION ALL` all of the datasets together so that all three sets of hash buckets will be within one table. We added `dataset_id` so that we can sort on it in the query after.
###Code
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
###Code
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
###Output
_____no_output_____
###Markdown
Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train/eval/test sets do not overlap, while taking a subsample of our global splits.
###Code
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
"""Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
"""
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
data_query, splitting_string,
lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
###Output
There are 7733 examples in the train dataset.
There are 1037 examples in the validation dataset.
There are 561 examples in the test dataset.
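###Markdown
As an optional sanity check on the subsampling above (a sketch, not part of the original lab), the filter `ABS(MOD(hash_values, every_n * modulo_divisor)) < train_percent` should keep roughly `train_percent / (every_n * modulo_divisor)` of the rows. The cell below mimics that filter in plain Python using made-up random integers in place of FARM_FINGERPRINT outputs.
###Code
import random

random.seed(42)

# Simulate signed 64-bit fingerprints with random integers (illustration only)
fake_hashes = [random.randint(-2**63, 2**63 - 1) for _ in range(1000000)]

kept = [h for h in fake_hashes
        if abs(h) % (every_n * modulo_divisor) < train_percent]

print("Fraction kept:    {:.5f}".format(len(kept) / len(fake_hashes)))
print("Expected (train): {:.5f}".format(train_percent / (every_n * modulo_divisor)))
###Output
_____no_output_____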
###Markdown
Preprocess data using Pandas We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound: that is, we'll duplicate some rows and make the `is_male` field be `Unknown`. Also, if there is more than one child, we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is.
###Code
train_df.head()
###Output
_____no_output_____
###Markdown
Also, notice that some very important numeric fields are missing in some rows (the `count` in Pandas excludes missing data).
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.
###Code
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# Filter out what we don't want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
###Output
_____no_output_____
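###Markdown
To make the effect of `preprocess` concrete before running it on the real splits, here is a small illustrative example on a hand-made dataframe (the values are invented purely for demonstration): every input row is kept and also cloned with `is_male` masked to `Unknown`, and any multiple birth in the cloned rows is relabeled `Multiple(2+)`.
###Code
toy_df = pd.DataFrame({
    "weight_pounds": [7.5, 4.1],
    "is_male": [True, False],
    "mother_age": [29, 33],
    "plurality": [1, 2],  # 1 = single birth, 2 = twins
    "gestation_weeks": [39, 35]})

# Expect 4 rows back: the 2 originals plus 2 masked "no ultrasound" clones
preprocess(toy_df)[["weight_pounds", "is_male", "plurality"]]
###Output
_____no_output_____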
###Markdown
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
###Code
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
###Output
_____no_output_____
###Markdown
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
###Code
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
###Output
==> eval.csv <==
7.43839671988,False,25,Single(1),37
7.06140625186,True,34,Single(1),41
7.43619209726,True,36,Single(1),40
3.56267015392,True,35,Twins(2),31
8.811876612139999,False,27,Single(1),36
8.0689187892,Unknown,36,Single(1),40
8.7633749145,Unknown,34,Single(1),39
7.43839671988,True,43,Single(1),40
4.62529825676,Unknown,38,Multiple(2+),35
6.1839664491,Unknown,20,Single(1),38
==> test.csv <==
6.37576861704,Unknown,21,Single(1),39
7.5618555866,True,22,Single(1),39
8.99926953484,Unknown,28,Single(1),42
7.82420567838,Unknown,24,Single(1),39
9.25059651352,True,26,Single(1),40
8.62448368944,Unknown,28,Single(1),39
5.2580249487,False,18,Single(1),38
7.87491199864,True,25,Single(1),37
5.81138522632,Unknown,41,Single(1),36
6.93794738514,True,24,Single(1),40
==> train.csv <==
7.81318256528,True,18,Single(1),43
7.31273323054,False,35,Single(1),34
6.75055446244,Unknown,37,Single(1),39
7.43839671988,True,32,Single(1),39
6.9666074791999995,True,20,Single(1),38
7.25100379718,True,32,Single(1),39
8.811876612139999,True,30,Single(1),39
7.24879917456,True,26,Single(1),40
7.62578964258,Unknown,22,Single(1),40
6.4992274837599995,Unknown,22,Single(1),38
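###Markdown
Since the CSV files are written without a header row, here is an optional sketch (not part of the original lab) that reads one of them back into Pandas with the column names reattached, just to spot-check the output; `.sample(frac=1)` is one simple way to shuffle the rows at read time if you ever need to in Pandas.
###Code
check_df = pd.read_csv("train.csv", header=None, names=columns)
print(check_df.shape)
check_df.sample(frac=1).head()
###Output
_____no_output_____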
###Markdown
LAB 4a: Creating a Sampled Dataset.**Learning Objectives**1. Set up the environment1. Sample the natality dataset to create train/eval/test sets1. Preprocess the data in a Pandas dataframe Introduction In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample. We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
from google.cloud import bigquery
import pandas as pd
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
###Output
_____no_output_____
###Markdown
Create ML datasets by sampling using BigQueryWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
###Code
bq = bigquery.Client(project = PROJECT)
###Output
_____no_output_____
###Markdown
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
###Code
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
###Output
_____no_output_____
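###Markdown
To make the bucket boundaries concrete, here is a small illustrative helper (a sketch, not part of the original lab) that maps a bucket index to a split name using the same ranges the queries below will use: [0, train_buckets) is train, [train_buckets, train_buckets + eval_buckets) is eval, and everything else up to modulo_divisor is test.
###Code
def bucket_to_split(bucket_index):
    """Maps a hash bucket index to its dataset split name."""
    if bucket_index < train_buckets:
        return "train"
    elif bucket_index < train_buckets + eval_buckets:
        return "eval"
    return "test"

for index in [0, 42, 79, 80, 89, 90, 99]:
    print(index, "->", bucket_to_split(index))
###Output
_____no_output_____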
###Markdown
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
###Code
def display_dataframe_head_from_query(query, count=10):
"""Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
Dataframe head with count number of results.
"""
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
###Output
_____no_output_____
###Markdown
For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
###Code
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
###Output
_____no_output_____
###Markdown
Using `COALESCE` would provide the same result as the nested `CASE WHEN`, and it is preferable when all we want is the first non-null value. To be precise, the `CASE WHEN` above would become `COALESCE(day, wday, 0) AS date`. You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions). The next query will combine our hash columns and leave us with just our label, features, and hash values.
###Code
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
###Output
_____no_output_____
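###Markdown
The property we rely on for repeatable splitting is that hashing the same concatenation of columns always yields the same value, so a given row lands in the same bucket every time the query runs. The cell below illustrates that idea in plain Python using hashlib as a stand-in (BigQuery's FARM_FINGERPRINT is a different hash function; this is only a sketch of the concept).
###Code
import hashlib

def bucket_for_key(key, num_buckets=100):
    """Deterministically maps a string key to a bucket in [0, num_buckets)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

# year + month + date + state + mother_birth_state, like the CONCAT above
key = "2005" + "7" + "14" + "Ohio" + "Ohio"
print(bucket_for_key(key), bucket_for_key(key))  # same bucket both times
###Output
_____no_output_____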
###Markdown
The next query is going to find the counts of each of the unique 657484 `hash_values`. This will be our first step at making actual hash buckets for our split via the `GROUP BY`.
###Code
# Get the counts of each of the unique hashes of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
###Output
_____no_output_____
###Markdown
The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
###Code
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
###Output
_____no_output_____
###Markdown
Raw record counts make it hard to judge the split, so in the next query we will normalize each hash bucket's count into a percentage of the total data.
###Code
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
###Output
_____no_output_____
###Markdown
We'll now select the range of buckets to be used in training.
###Code
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
###Output
_____no_output_____
###Markdown
We'll do the same by selecting the range of buckets to be used for evaluation.
###Code
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll select the hash buckets to be used for the test split.
###Code
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
###Output
_____no_output_____
###Markdown
In the query below, we'll `UNION ALL` all of the datasets together so that all three sets of hash buckets end up in one table. We add a `dataset_id` so that we can sort on it in the following query.
###Code
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
###Code
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
###Output
_____no_output_____
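###Markdown
If you prefer to verify the split numerically rather than eyeballing the table above, here is an optional convenience check (a sketch, not part of the original lab) that pulls the same summary into a dataframe and confirms the percentages add up to roughly 1.0.
###Code
split_df = bq.query(split_query).to_dataframe()
print(split_df[["dataset_name", "num_records", "percent_records"]])
print("Total percent_records: {:.4f}".format(split_df["percent_records"].sum()))
###Output
_____no_output_____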
###Markdown
Now that we know our splitting values produce a good global split of the data, here's a way to take a well-distributed subsample of each of those global splits while keeping the train/eval/test sets non-overlapping.
###Code
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
"""Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
"""
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
data_query, splitting_string,
lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
###Output
There are 7733 examples in the train dataset.
There are 1037 examples in the validation dataset.
There are 561 examples in the test dataset.
###Markdown
Preprocess data using Pandas We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound: that is, we'll duplicate some rows and make the `is_male` field be `Unknown`. Also, if there is more than one child, we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is.
###Code
train_df.head()
###Output
_____no_output_____
###Markdown
Also, notice that some very important numeric fields are missing in some rows (the `count` in Pandas excludes missing data).
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.
###Code
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# Filter out what we don't want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
###Code
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
###Output
_____no_output_____
###Markdown
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
###Code
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
###Output
==> eval.csv <==
7.43839671988,False,25,Single(1),37
7.06140625186,True,34,Single(1),41
7.43619209726,True,36,Single(1),40
3.56267015392,True,35,Twins(2),31
8.811876612139999,False,27,Single(1),36
8.0689187892,Unknown,36,Single(1),40
8.7633749145,Unknown,34,Single(1),39
7.43839671988,True,43,Single(1),40
4.62529825676,Unknown,38,Multiple(2+),35
6.1839664491,Unknown,20,Single(1),38
==> test.csv <==
6.37576861704,Unknown,21,Single(1),39
7.5618555866,True,22,Single(1),39
8.99926953484,Unknown,28,Single(1),42
7.82420567838,Unknown,24,Single(1),39
9.25059651352,True,26,Single(1),40
8.62448368944,Unknown,28,Single(1),39
5.2580249487,False,18,Single(1),38
7.87491199864,True,25,Single(1),37
5.81138522632,Unknown,41,Single(1),36
6.93794738514,True,24,Single(1),40
==> train.csv <==
7.81318256528,True,18,Single(1),43
7.31273323054,False,35,Single(1),34
6.75055446244,Unknown,37,Single(1),39
7.43839671988,True,32,Single(1),39
6.9666074791999995,True,20,Single(1),38
7.25100379718,True,32,Single(1),39
8.811876612139999,True,30,Single(1),39
7.24879917456,True,26,Single(1),40
7.62578964258,Unknown,22,Single(1),40
6.4992274837599995,Unknown,22,Single(1),38
###Markdown
LAB 4a: Creating a Sampled Dataset.**Learning Objectives**1. Set up the environment1. Sample the natality dataset to create train/eval/test sets1. Preprocess the data in a Pandas dataframe Introduction In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample. We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
from google.cloud import bigquery
import pandas as pd
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
###Output
_____no_output_____
###Markdown
Create ML datasets by sampling using BigQueryWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
###Code
bq = bigquery.Client(project = PROJECT)
###Output
_____no_output_____
###Markdown
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
###Code
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
###Output
_____no_output_____
###Markdown
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
###Code
def display_dataframe_head_from_query(query, count=10):
"""Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
Dataframe head with count number of results.
"""
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
###Output
_____no_output_____
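###Markdown
As an optional usage note, the helper above simply appends a LIMIT clause to whatever SELECT statement you pass it, so it can preview any query; for example, peeking at a couple of columns of the natality table directly:
###Code
display_dataframe_head_from_query(
    "SELECT weight_pounds, is_male FROM publicdata.samples.natality", count=5)
###Output
_____no_output_____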
###Markdown
For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
###Code
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
###Output
_____no_output_____
###Markdown
Using `COALESCE` would provide the same result as the nested `CASE WHEN`, and it is preferable when all we want is the first non-null value. To be precise, the `CASE WHEN` above would become `COALESCE(day, wday, 0) AS date`. You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions). The next query will combine our hash columns and leave us with just our label, features, and hash values.
###Code
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
###Output
_____no_output_____
###Markdown
The next query is going to find the counts of each of the unique 657484 `hash_values`. This will be our first step at making actual hash buckets for our split via the `GROUP BY`.
###Code
# Get the counts of each of the unique hashes of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
###Output
_____no_output_____
###Markdown
The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
###Code
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
###Output
_____no_output_____
###Markdown
Raw record counts make it hard to judge the split, so in the next query we will normalize each hash bucket's count into a percentage of the total data.
###Code
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
###Output
_____no_output_____
###Markdown
We'll now select the range of buckets to be used in training.
###Code
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
###Output
_____no_output_____
###Markdown
We'll do the same by selecting the range of buckets to be used for evaluation.
###Code
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll select the hash buckets to be used for the test split.
###Code
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
###Output
_____no_output_____
###Markdown
LAB 4a: Creating a Sampled Dataset.**Learning Objectives**1. Set up the environment1. Sample the natality dataset to create train/eval/test sets1. Preprocess the data in a Pandas dataframe Introduction In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample. We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
%%bash
python3 -m pip freeze | grep google-cloud-bigquery==1.6.1 || \
python3 -m pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
from google.cloud import bigquery
import pandas as pd
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
###Output
_____no_output_____
###Markdown
Create ML datasets by sampling using BigQueryWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
###Code
bq = bigquery.Client(project = PROJECT)
###Output
_____no_output_____
###Markdown
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
###Code
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
###Output
_____no_output_____
###Markdown
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
###Code
def display_dataframe_head_from_query(query, count=10):
"""Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
Dataframe head with count number of results.
"""
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
###Output
_____no_output_____
###Markdown
For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
###Code
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
###Output
_____no_output_____
###Markdown
Using `COALESCE` would provide the same result as the nested `CASE WHEN`, and it is preferable when all we want is the first non-null value. To be precise, the `CASE WHEN` above would become `COALESCE(day, wday, 0) AS date`. You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions). The next query will combine our hash columns and leave us with just our label, features, and hash values.
###Code
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ABS(
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
###Output
_____no_output_____
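###Markdown
A short aside on the ABS above (illustrative only): FARM_FINGERPRINT returns a signed INT64, and SQL's MOD keeps the sign of its first argument, so without an ABS somewhere the bucket index could be negative and fall outside [0, modulo_divisor). Taking ABS around the fingerprint here, or around the MOD as in the earlier version of the query, keeps every bucket index in range. Python's % already returns a non-negative result for a positive divisor, so the sketch below emulates the SQL behavior to show the difference.
###Code
def sql_mod(x, y):
    """Emulates SQL MOD, whose result takes the sign of the dividend x."""
    remainder = abs(x) % abs(y)
    return -remainder if x < 0 else remainder

fingerprint = -8493274029385471  # a made-up negative hash value
print(sql_mod(fingerprint, 100))       # negative: not a valid bucket index
print(sql_mod(abs(fingerprint), 100))  # with ABS: valid bucket in [0, 100)
###Output
_____no_output_____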
###Markdown
The next query is going to find the counts of each of the unique 657484 `hash_values`. This will be our first step at making actual hash buckets for our split via the `GROUP BY`.
###Code
# Get the counts of each of the unique hashes of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
###Output
_____no_output_____
###Markdown
The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
###Code
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
MOD(hash_values, {modulo_divisor}) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
MOD(hash_values, {modulo_divisor})
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
###Output
_____no_output_____
###Markdown
Raw record counts make it hard to judge the split, so in the next query we will normalize each hash bucket's count into a percentage of the total data.
###Code
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
###Output
_____no_output_____
###Markdown
We'll now select the range of buckets to be used in training.
###Code
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
###Output
_____no_output_____
###Markdown
We'll do the same by selecting the range of buckets to be used for evaluation.
###Code
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll select the hash buckets to be used for the test split.
###Code
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
###Output
_____no_output_____
###Markdown
In the query below, we'll `UNION ALL` all of the datasets together so that all three sets of hash buckets end up in one table. We add a `dataset_id` so that we can sort on it in the following query.
###Code
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
###Code
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
###Output
_____no_output_____
###Markdown
Now that we know our splitting values produce a good global split of the data, here's a way to take a well-distributed subsample of each of those global splits while keeping the train/eval/test sets non-overlapping.
###Code
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "MOD(hash_values, {0} * {1})".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
"""Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
"""
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
    data_query, splitting_string,
lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
    data_query, splitting_string,
lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
    data_query, splitting_string,
lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
###Output
There are 7733 examples in the train dataset.
There are 1037 examples in the validation dataset.
There are 561 examples in the test dataset.
###Markdown
Preprocess data using Pandas We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound: that is, we'll duplicate some rows and make the `is_male` field be `Unknown`. Also, if there is more than one child, we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is.
###Code
train_df.head()
###Output
_____no_output_____
###Markdown
Also, notice that some very important numeric fields are missing in some rows (the `count` in Pandas excludes missing data).
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.
###Code
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# Filter out what we don't want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
###Code
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
###Output
_____no_output_____
###Markdown
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
###Code
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
###Output
==> eval.csv <==
7.43839671988,False,25,Single(1),37
7.06140625186,True,34,Single(1),41
7.43619209726,True,36,Single(1),40
3.56267015392,True,35,Twins(2),31
8.811876612139999,False,27,Single(1),36
8.0689187892,Unknown,36,Single(1),40
8.7633749145,Unknown,34,Single(1),39
7.43839671988,True,43,Single(1),40
4.62529825676,Unknown,38,Multiple(2+),35
6.1839664491,Unknown,20,Single(1),38
==> test.csv <==
6.37576861704,Unknown,21,Single(1),39
7.5618555866,True,22,Single(1),39
8.99926953484,Unknown,28,Single(1),42
7.82420567838,Unknown,24,Single(1),39
9.25059651352,True,26,Single(1),40
8.62448368944,Unknown,28,Single(1),39
5.2580249487,False,18,Single(1),38
7.87491199864,True,25,Single(1),37
5.81138522632,Unknown,41,Single(1),36
6.93794738514,True,24,Single(1),40
==> train.csv <==
7.81318256528,True,18,Single(1),43
7.31273323054,False,35,Single(1),34
6.75055446244,Unknown,37,Single(1),39
7.43839671988,True,32,Single(1),39
6.9666074791999995,True,20,Single(1),38
7.25100379718,True,32,Single(1),39
8.811876612139999,True,30,Single(1),39
7.24879917456,True,26,Single(1),40
7.62578964258,Unknown,22,Single(1),40
6.4992274837599995,Unknown,22,Single(1),38
###Markdown
LAB 4a: Creating a Sampled Dataset.**Learning Objectives**1. Set up the environment1. Sample the natality dataset to create train/eval/test sets1. Preprocess the data in a Pandas dataframe Introduction In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample. We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
%%bash
python3 -m pip freeze | grep google-cloud-bigquery==1.6.1 || \
python3 -m pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
from google.cloud import bigquery
import pandas as pd
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
###Output
_____no_output_____
###Markdown
Create ML datasets by sampling using BigQueryWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
###Code
bq = bigquery.Client(project = PROJECT)
###Output
_____no_output_____
###Markdown
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
###Code
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
###Output
_____no_output_____
###Markdown
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
###Code
def display_dataframe_head_from_query(query, count=10):
"""Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
Dataframe head with count number of results.
"""
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
###Output
_____no_output_____
###Markdown
For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
###Code
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
###Output
_____no_output_____
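###Markdown
To make the nested CASE WHEN above easier to read, here is the same fallback rule written as a tiny Python function (purely for illustration): prefer day if it is present, otherwise fall back to wday, otherwise use 0.
###Code
def pick_date_field(day, wday):
    """Mirrors the nested CASE WHEN: day if present, else wday, else 0."""
    if day is not None:
        return day
    if wday is not None:
        return wday
    return 0

print(pick_date_field(14, None))    # -> 14
print(pick_date_field(None, 3))     # -> 3
print(pick_date_field(None, None))  # -> 0
###Output
_____no_output_____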
###Markdown
Using `COALESCE` would provide the same result as the nested `CASE WHEN`, and it is preferable when all we want is the first non-null value. To be precise, the `CASE WHEN` above would become `COALESCE(day, wday, 0) AS date`. You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions). The next query will combine our hash columns and leave us with just our label, features, and hash values.
###Code
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
###Output
_____no_output_____
###Markdown
The next query is going to find the counts of each of the unique 657484 `hash_values`. This will be our first step at making actual hash buckets for our split via the `GROUP BY`.
###Code
# Get the counts of each of the unique hashes of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
###Output
_____no_output_____
###Markdown
The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
###Code
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
###Output
_____no_output_____
###Markdown
Raw record counts make it hard to judge the split, so in the next query we will normalize each hash bucket's count into a percentage of the total data.
###Code
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
###Output
_____no_output_____
###Markdown
We'll now select the range of buckets to be used in training.
###Code
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
###Output
_____no_output_____
###Markdown
We'll do the same by selecting the range of buckets to be used for evaluation.
###Code
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll select the hash buckets to be used for the test split.
###Code
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
###Output
_____no_output_____
###Markdown
In the query below, we'll `UNION ALL` all of the datasets together so that all three sets of hash buckets end up in one table. We add a `dataset_id` so that we can sort on it in the following query.
###Code
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
###Code
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
###Output
_____no_output_____
###Markdown
Now that we know our splitting values produce a good global split of the data, here's a way to take a well-distributed subsample of each of those global splits while keeping the train/eval/test sets non-overlapping.
###Code
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
"""Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
"""
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
    data_query, splitting_string,
lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
    data_query, splitting_string,
lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
    data_query, splitting_string,
lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
###Output
There are 7733 examples in the train dataset.
There are 1037 examples in the validation dataset.
There are 561 examples in the test dataset.
###Markdown
Preprocess data using Pandas We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound: that is, we'll duplicate some rows and make the `is_male` field be `Unknown`. Also, if there is more than one child, we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is.
###Code
train_df.head()
###Output
_____no_output_____
###Markdown
Also, notice that some very important numeric fields are missing in some rows (the `count` in Pandas excludes missing data).
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are good predictors and because they are easy enough to collect.
###Code
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# Filter out what we don"t want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
###Code
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
###Output
_____no_output_____
###Markdown
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
###Code
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
###Output
==> eval.csv <==
7.43839671988,False,25,Single(1),37
7.06140625186,True,34,Single(1),41
7.43619209726,True,36,Single(1),40
3.56267015392,True,35,Twins(2),31
8.811876612139999,False,27,Single(1),36
8.0689187892,Unknown,36,Single(1),40
8.7633749145,Unknown,34,Single(1),39
7.43839671988,True,43,Single(1),40
4.62529825676,Unknown,38,Multiple(2+),35
6.1839664491,Unknown,20,Single(1),38
==> test.csv <==
6.37576861704,Unknown,21,Single(1),39
7.5618555866,True,22,Single(1),39
8.99926953484,Unknown,28,Single(1),42
7.82420567838,Unknown,24,Single(1),39
9.25059651352,True,26,Single(1),40
8.62448368944,Unknown,28,Single(1),39
5.2580249487,False,18,Single(1),38
7.87491199864,True,25,Single(1),37
5.81138522632,Unknown,41,Single(1),36
6.93794738514,True,24,Single(1),40
==> train.csv <==
7.81318256528,True,18,Single(1),43
7.31273323054,False,35,Single(1),34
6.75055446244,Unknown,37,Single(1),39
7.43839671988,True,32,Single(1),39
6.9666074791999995,True,20,Single(1),38
7.25100379718,True,32,Single(1),39
8.811876612139999,True,30,Single(1),39
7.24879917456,True,26,Single(1),40
7.62578964258,Unknown,22,Single(1),40
6.4992274837599995,Unknown,22,Single(1),38
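###Markdown
As an optional sanity check, we can read one of the files back with Pandas, reusing the `columns` list since the CSVs were written without a header:
###Code
# Read train.csv back in to confirm the column layout and the row count
check_df = pd.read_csv("train.csv", names=columns)
print("train.csv contains {} rows".format(len(check_df)))
check_df.head()
###Output
_____no_output_____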
###Markdown
In the below query, we'll `UNION ALL` all of the datasets together so that all three sets of hash buckets will be within one table. We added `dataset_id` so that we can sort on it in the query after.
###Code
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
###Code
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
###Output
_____no_output_____
###Markdown
Now that we know that our splitting values produce a good global split of our data, here's a way to take a subsample of those global splits that is still well distributed and keeps the train/eval/test sets from overlapping.
###Code
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
"""Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
"""
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
data_query, splitting_string,
lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
###Output
There are 7733 examples in the train dataset.
There are 1037 examples in the validation dataset.
There are 561 examples in the test dataset.
###Markdown
Preprocess data using Pandas We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is, we'll duplicate some rows and make the `is_male` field be `Unknown`. Also, if there is more than one child we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is.
###Code
train_df.head()
###Output
_____no_output_____
###Markdown
Also, notice that there are some very important numeric fields that are missing in some rows (the `count` row in the Pandas summary does not include missing values).
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are good predictors and because they are easy enough to collect.
###Code
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# Filter out what we don"t want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
###Code
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
###Output
_____no_output_____
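###Markdown
Since the augmentation clones every row with `is_male` masked, roughly half of each split should now have `is_male` equal to `Unknown`. A quick optional check:
###Code
# The "Unknown" clones should account for about half of the training rows
print("Total training rows: {}".format(len(train_df)))
print(train_df["is_male"].value_counts())
###Output
_____no_output_____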
###Markdown
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
###Code
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
###Output
==> eval.csv <==
7.43839671988,False,25,Single(1),37
7.06140625186,True,34,Single(1),41
7.43619209726,True,36,Single(1),40
3.56267015392,True,35,Twins(2),31
8.811876612139999,False,27,Single(1),36
8.0689187892,Unknown,36,Single(1),40
8.7633749145,Unknown,34,Single(1),39
7.43839671988,True,43,Single(1),40
4.62529825676,Unknown,38,Multiple(2+),35
6.1839664491,Unknown,20,Single(1),38
==> test.csv <==
6.37576861704,Unknown,21,Single(1),39
7.5618555866,True,22,Single(1),39
8.99926953484,Unknown,28,Single(1),42
7.82420567838,Unknown,24,Single(1),39
9.25059651352,True,26,Single(1),40
8.62448368944,Unknown,28,Single(1),39
5.2580249487,False,18,Single(1),38
7.87491199864,True,25,Single(1),37
5.81138522632,Unknown,41,Single(1),36
6.93794738514,True,24,Single(1),40
==> train.csv <==
7.81318256528,True,18,Single(1),43
7.31273323054,False,35,Single(1),34
6.75055446244,Unknown,37,Single(1),39
7.43839671988,True,32,Single(1),39
6.9666074791999995,True,20,Single(1),38
7.25100379718,True,32,Single(1),39
8.811876612139999,True,30,Single(1),39
7.24879917456,True,26,Single(1),40
7.62578964258,Unknown,22,Single(1),40
6.4992274837599995,Unknown,22,Single(1),38
###Markdown
LAB 4a: Creating a Sampled Dataset.**Learning Objectives**1. Set up the environment1. Sample the natality dataset to create train/eval/test sets1. Preprocess the data in a Pandas dataframe Introduction In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample. We will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and, if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
###Output
google-cloud-bigquery==1.6.1
###Markdown
Import necessary libraries.
###Code
from google.cloud import bigquery
import pandas as pd
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
PROJECT = "cloud-training-demos" # Replace with your PROJECT
###Output
_____no_output_____
###Markdown
Create ML datasets by sampling using BigQueryWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
###Code
bq = bigquery.Client(project = PROJECT)
###Output
_____no_output_____
###Markdown
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
###Code
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
###Output
_____no_output_____
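###Markdown
To build some intuition for how hashing plus a modulo gives a repeatable split, here is a small pure-Python sketch. It uses `hashlib` on a made-up key as a stand-in for BigQuery's `FARM_FINGERPRINT`, so it is illustrative only and is not used elsewhere in this lab:
###Code
import hashlib
def assign_split(key):
    """Deterministically assigns a record key to train/eval/test (illustrative only)."""
    # hashlib.md5 stands in for FARM_FINGERPRINT here
    bucket = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % modulo_divisor
    if bucket < train_buckets:
        return "train"
    elif bucket < train_buckets + eval_buckets:
        return "eval"
    return "test"
# The same key always lands in the same split, however many times we run this
print(assign_split("2005_7_12_CA_NY"))
print(assign_split("2005_7_12_CA_NY"))
###Output
_____no_output_____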
###Markdown
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
###Code
def display_dataframe_head_from_query(query, count=10):
"""Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
Dataframe head with count number of results.
"""
df = bq.query(
query + " LIMIT {limit}".format(
limit=count)).to_dataframe()
return df.head(count)
###Output
_____no_output_____
###Markdown
For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
###Code
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
###Output
_____no_output_____
###Markdown
Using `COALESCE` would provide the same result as the nested `CASE WHEN`. This is preferable when all we want is the first non-null instance. To be precise, the `CASE WHEN` above would become `COALESCE(day, wday, 0) AS date`. You can read more about it [here](https://cloud.google.com/bigquery/docs/reference/standard-sql/conditional_expressions). The next query will combine our hash columns and will leave us with just our label, features, and hash values.
###Code
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)
display_dataframe_head_from_query(data_query)
###Output
_____no_output_____
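###Markdown
As an optional sanity check of the `COALESCE` remark above (assuming the same BigQuery setup), we can confirm that the two date expressions never disagree on the filtered natality rows:
###Code
# Count rows where COALESCE and the nested CASE WHEN would disagree (expect 0)
coalesce_check_query = """
SELECT
    COUNTIF(
        COALESCE(day, wday, 0) !=
        CASE
            WHEN day IS NULL THEN
                CASE WHEN wday IS NULL THEN 0 ELSE wday END
            ELSE day
        END) AS num_mismatches
FROM
    publicdata.samples.natality
WHERE
    year > 2000
"""
display_dataframe_head_from_query(coalesce_check_query)
###Output
_____no_output_____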
###Markdown
The next query is going to find the counts of each of the 657484 unique `hash_values`. This will be our first step toward making actual hash buckets for our split via the `GROUP BY`.
###Code
# Get the counts of each of the unique hashs of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(CTE_data=data_query)
display_dataframe_head_from_query(first_bucketing_query)
###Output
_____no_output_____
###Markdown
The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
###Code
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(second_bucketing_query)
###Output
_____no_output_____
###Markdown
Raw record counts make it hard to judge the split at a glance, so in the next query we will normalize each bucket's count into a percentage of the total data.
###Code
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)
display_dataframe_head_from_query(percentages_query)
###Output
_____no_output_____
###Markdown
We'll now select the range of buckets to be used in training.
###Code
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets)
display_dataframe_head_from_query(train_query)
###Output
_____no_output_____
###Markdown
We'll do the same by selecting the range of buckets to be used for evaluation.
###Code
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets)
display_dataframe_head_from_query(eval_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll select the hash buckets to be used for the test split.
###Code
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor)
display_dataframe_head_from_query(test_query)
###Output
_____no_output_____
###Markdown
In the below query, we'll `UNION ALL` all of the datasets together so that all three sets of hash buckets will be within one table. We added `dataset_id` so that we can sort on it in the query after.
###Code
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)
display_dataframe_head_from_query(union_query)
###Output
_____no_output_____
###Markdown
Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
###Code
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(CTE_union=union_query)
display_dataframe_head_from_query(split_query)
###Output
_____no_output_____
###Markdown
Now that we know that our splitting values produce a good global split of our data, here's a way to take a subsample of those global splits that is still well distributed and keeps the train/eval/test sets from overlapping.
###Code
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)
def create_data_split_sample_df(query_string, splitting_string, lo, up):
"""Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
"""
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up))
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
data_query, splitting_string,
lo=0, up=train_percent)
eval_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent, up=train_percent + eval_percent)
test_df = create_data_split_sample_df(
data_query, splitting_string,
lo=train_percent + eval_percent, up=modulo_divisor)
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
###Output
There are 7733 examples in the train dataset.
There are 1037 examples in the validation dataset.
There are 561 examples in the test dataset.
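###Markdown
The counts above are small because `every_n` widens the modulo: the filter keeps only bucket indices below the chosen percentage bounds out of `every_n * modulo_divisor` possible values. A quick back-of-the-envelope check of the fractions being kept (assuming records are spread roughly uniformly across hash values):
###Code
total_buckets = every_n * modulo_divisor  # 1000 * 100 = 100,000 buckets
train_fraction = train_percent / total_buckets
eval_fraction = eval_percent / total_buckets
test_fraction = (modulo_divisor - train_percent - eval_percent) / total_buckets
print("Approximate fraction of all records kept per split:")
print("train: {:.4%}, eval: {:.4%}, test: {:.4%}".format(
    train_fraction, eval_fraction, test_fraction))
###Output
_____no_output_____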
###Markdown
Preprocess data using Pandas We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is, we'll duplicate some rows and make the `is_male` field be `Unknown`. Also, if there is more than one child we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is.
###Code
train_df.head()
###Output
_____no_output_____
###Markdown
Also, notice that there are some very important numeric fields that are missing in some rows (the `count` row in the Pandas summary does not include missing values).
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are good predictors and because they are easy enough to collect.
###Code
def preprocess(df):
""" Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
as simulated no ultrasound data masking some of the original data.
"""
# Clean up raw data
# Filter out what we don"t want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat(
[df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
###Output
_____no_output_____
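###Markdown
Before applying `preprocess` to the real splits, here is a quick illustration on a tiny hand-made dataframe (the values below are made up purely for demonstration): each row should gain a clone with `is_male` set to `Unknown`, and the twins row's clone should get `Multiple(2+)`.
###Code
# Toy example: two rows in, four rows out (each row gets a masked clone)
toy_df = pd.DataFrame({
    "weight_pounds": [7.5, 4.0],
    "is_male": [True, False],
    "mother_age": [28, 33],
    "plurality": [1, 2],
    "gestation_weeks": [39, 35],
})
preprocess(toy_df)
###Output
_____no_output_____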
###Markdown
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
###Code
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
###Output
_____no_output_____
###Markdown
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
###Code
train_df.describe()
###Output
_____no_output_____
###Markdown
Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
###Code
# Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
###Output
==> eval.csv <==
7.43839671988,False,25,Single(1),37
7.06140625186,True,34,Single(1),41
7.43619209726,True,36,Single(1),40
3.56267015392,True,35,Twins(2),31
8.811876612139999,False,27,Single(1),36
8.0689187892,Unknown,36,Single(1),40
8.7633749145,Unknown,34,Single(1),39
7.43839671988,True,43,Single(1),40
4.62529825676,Unknown,38,Multiple(2+),35
6.1839664491,Unknown,20,Single(1),38
==> test.csv <==
6.37576861704,Unknown,21,Single(1),39
7.5618555866,True,22,Single(1),39
8.99926953484,Unknown,28,Single(1),42
7.82420567838,Unknown,24,Single(1),39
9.25059651352,True,26,Single(1),40
8.62448368944,Unknown,28,Single(1),39
5.2580249487,False,18,Single(1),38
7.87491199864,True,25,Single(1),37
5.81138522632,Unknown,41,Single(1),36
6.93794738514,True,24,Single(1),40
==> train.csv <==
7.81318256528,True,18,Single(1),43
7.31273323054,False,35,Single(1),34
6.75055446244,Unknown,37,Single(1),39
7.43839671988,True,32,Single(1),39
6.9666074791999995,True,20,Single(1),38
7.25100379718,True,32,Single(1),39
8.811876612139999,True,30,Single(1),39
7.24879917456,True,26,Single(1),40
7.62578964258,Unknown,22,Single(1),40
6.4992274837599995,Unknown,22,Single(1),38
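###Markdown
To make the shuffling-during-read point above concrete, here is a minimal sketch of how these CSV files could be consumed later with `tf.data`. TensorFlow is not otherwise required by this notebook, so treat this as an illustrative preview:
###Code
import tensorflow as tf
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
CSV_DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
def parse_csv(line):
    """Parses one CSV line into a (features, label) pair."""
    fields = tf.io.decode_csv(line, record_defaults=CSV_DEFAULTS)
    features = dict(zip(CSV_COLUMNS, fields))
    label = features.pop("weight_pounds")
    return features, label
# Shuffling happens at read time, which is what we want for distributed training
dataset = (tf.data.TextLineDataset("train.csv")
           .shuffle(buffer_size=10000)
           .map(parse_csv)
           .batch(32))
###Output
_____no_output_____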
|
treedlib/treedlib.ipynb
|
###Markdown
TreeDLib
###Code
%load_ext autoreload
%autoreload 2
%load_ext sql
#from treedlib import *
# Note: reloading for submodules doesn't work, so we load directly here
from treedlib.util import *
from treedlib.structs import *
from treedlib.templates import *
from treedlib.features import *
import lxml.etree as et
import numpy as np
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
The sql extension is already loaded. To reload it, use:
%reload_ext sql
###Markdown
We define three classes of operators:* _NodeSets:_ $S : 2^T \mapsto 2^T$* _Indicators:_ $I : 2^T \mapsto \{0,1\}^F$* _Combinators:_ $C : \{0,1\}^F \times \{0,1\}^F \mapsto \{0,1\}^F$where $T$ is a given input tree, and $F$ is the dimension of the feature space. Binning
###Code
%sql postgresql://ajratner@localhost:6432/genomics_ajratner2
res_seq = %sql SELECT * FROM genepheno_features WHERE feature LIKE '%SEQ%'
res_dep = %sql SELECT * FROM genepheno_features WHERE feature NOT LIKE '%SEQ%'
%matplotlib inline
import matplotlib.pyplot as plt
seq_lens = [len(rs.feature.split('_')) for rs in res_seq]
n, bins, patches = plt.hist(seq_lens, 50, normed=1, facecolor='green', alpha=0.75)
print [np.percentile(seq_lens, p) for p in [25,50,75]]
dep_lens = [len(rs.feature.split('_')) for rs in res_dep]
n, bins, patches = plt.hist(dep_lens, 50, normed=1, facecolor='green', alpha=0.75)
print [np.percentile(dep_lens, p) for p in [25,50,75]]
###Output
_____no_output_____
###Markdown
Adding new feature types...
###Code
ds = {'GENE': ['TFB1M']}
gen_feats = compile_relation_feature_generator()
for f in gen_feats(xt.root, gidxs, pidxs):
print f
###Output
_____no_output_____
###Markdown
Genomics Debugging Pipeline* Fix this!* _Features to add:_ * modifiers of VBs in between * candidates in between? * Better way to do siblings, when siblings have children...? * LeftAll / RightAll * Also, get unigrams, etc. * **Use wildcard, e.g. "ABC now causes" --> WORD:LEFT-OF-MENTION[?_causes]**? * Modifiers before e.g. "We investigated whether..." / NEGATIONS (see Johannes's email / list) * Handle negation words explicitly?
###Code
from random import shuffle
RESULTS_ROOT = '/lfs/raiders7/hdd/ajratner/dd-genomics/alex-results/'
def get_exs(header, rel_path, root=RESULTS_ROOT):
rids = []
in_section = False
with open(root + rel_path, 'rb') as f:
for line in f:
if in_section and len(line.strip()) == 0:
break
elif in_section:
rids.append('_'.join(map(lambda x : x[0].upper() + x[1:], line.strip().split('_'))))
elif line.strip() == header:
in_section = True
return rids
false_pos = get_exs('False Positives:', '02-01-16/stats_causation_1500.tsv')
false_negs = get_exs('False Negatives:', '02-01-16/stats_causation_1500.tsv')
#shuffle(false_pos)
#shuffle(false_negs)
#relation_id = false_negs[12]
#print relation_id
#relation_id = '20396601_Body.0_287_0_20396601_Body.0_287_25'
relation_id = '18697824_Abstract.0_1_24_18697824_Abstract.0_1_6'
# Connect to correct db
%sql postgresql://ajratner@localhost:6432/genomics_ajratner
# Getting the component IDs
id = relation_id.split('_')
doc_id = id[0]
section_id = id[1][0].upper() + id[1][1:]
sent_id = int(id[2])
gidxs = map(int, relation_id.split(doc_id)[1].strip('_').split('_')[-1].split('-'))
pidxs = map(int, relation_id.split(doc_id)[2].strip('_').split('_')[-1].split('-'))
cids = [gidxs, pidxs]
# Get sentence from db + convert to XMLTree
res = %sql SELECT words, lemmas, poses, ners, dep_paths AS "dep_labels", dep_parents FROM sentences_input WHERE doc_id = :doc_id AND section_id = :section_id AND sent_id = :sent_id;
rows = [dict((k, v.split('|^|')) for k,v in dict(row).iteritems()) for row in res]
xts = map(corenlp_to_xmltree, rows)
xt = xts[0]
# Show XMLTree
xt.render_tree(highlight=[gidxs, pidxs])
# Print TreeDLib features
#print_gen(get_relation_features(xt.root, gidxs, pidxs))
RightNgrams(RightSiblings(Mention(0)), 'lemma').print_apply(xt.root, cids)
seen.add("blah")
"blah" in seen
dict_sub = compile_dict_sub(brown_clusters_path="clusters_VB_NN.lemma.tsv")
Ngrams(Between(Mention(0), Mention(1)), 'word', 2).print_apply(xt.root, cids, dict_sub=dict_sub)
xt.root.xpath("//*[@dep_label='dobj']/@word")
Indicator(Between(Mention(0), Mention(1)), 'dep_label').print_apply(xt.root, cids)
Ngrams(Between(Mention(0), Mention(1)), 'word', 2).print_apply(xt.root, cids)
dict_sub = compile_dict_sub([
('FOUND', set(['found', 'identified', 'discovered'])),
('CAUSES', set(['causes']))
])
Ngrams(Between(Mention(0), Mention(1)), 'word', 2).print_apply(xt.root, cids, dict_sub=dict_sub)
Ngrams(Children(Filter(Between(Mention(0), Mention(1)), 'pos', 'VB')), 'word', 1).print_apply(xt.root, cids)
Ngrams(Children(Filter(Between(Mention(0), Mention(1)), 'pos', 'VB')), 'word', 1).print_apply(xt.root, cids)
###Output
_____no_output_____
###Markdown
Error analysis round 4 False negatives:* [0] `24065538_Abstract.0_2_8_24065538_Abstract.0_2_14`: * **Should this be association instead?** * "... have been found... however studies of the association between ... and OSA risk have reported inconsistent findings"* [1] `8844207_Abstract.0_5_6_8844207_Abstract.0_5_1`: * **"known {{G}} mutations"*** [2] `24993959_Abstract.1_3_36_24993959_Abstract.1_3_46`: * `UnicodeDecodeError`! * [3] `22653594_Abstract.0_1_5_22653594_Abstract.0_1_25-26-27`: * **Incorrectly labeled** * [4] `21282350_Abstract.0_1_13_21282350_Abstract.0_1_20`: * `UnicodeDecodeError`!* [5] `11461952_Abstract.0_10_8_11461952_Abstract.0_10_15-16`: * "This study deomstrates that ... can be responsible for ..." * "{{G}} responsible for {{P}}" * [6] `25110572_Body.0_103_42_25110572_Body.0_103_18-19`: * **Incorrectly labeled??** * [7] `22848613_Body.0_191_7_22848613_Body.0_191_15`: * **Incorrectly labeled??*** [8] `19016241_Abstract.0_2_29_19016241_Abstract.0_2_34-35`: * **Incorrectly labeled??** * "weakly penetrant" * [9] `19877056_Abstract.0_2_37_19877056_Abstract.0_2_7`: * **"{{P}} attributable to {{G}}"*** [10] `11079449_Abstract.0_5_48_11079449_Abstract.0_5_41`: * **_Tough example: ref to a list of pairs!_*** [11] `11667976_Body.0_6_31_11667976_Body.0_6_34-35`: * **Is this correctly labeled...?*** [12] `11353725_Abstract.0_7_13_11353725_Abstract.0_7_9`: * **Is this correctly labeled...?*** [13] `20499351_Body.0_120_6_20499351_Body.0_120_10-11-12`: * "Patients homozygous for {{g}} mutation had" * "had" on path between * [14] `10511432_Abstract.0_1_12_10511432_Abstract.0_1_23`: * **Incorrectly labeled...??** * [15] `17033686_Abstract.0_4_4_17033686_Abstract.0_4_12`: * "misense mutation in {{G}} was described in a family with {{P}}" * **_Incorrectly labeled...?_*** [16] `23288328_Body.0_179_20_23288328_Body.0_179_24-25`: * **{{G}} - related {{P}}*** [17] `21203343_Body.0_127_4_21203343_Body.0_127_19`: * "have been reported in"- **Incorrectly labeled?*** [18] `9832037_Abstract.0_2_13_9832037_Abstract.0_2_26-27-28`: * "{{G}} sympotms include {{P}}", "include"* [19] `18791638_Body.0_8_6_18791638_Body.0_8_0`: * "{{P}} results from {{G}}"
###Code
%%sql
-- Get the features + weights for an example
SELECT f.feature, w.weight
FROM
genepheno_features f,
dd_inference_result_variables_mapped_weights w
WHERE
f.relation_id = :relation_id
AND w.description = 'inf_istrue_genepheno_causation_inference--' || f.feature
ORDER BY w.weight DESC;
res = _
sum(r[1] for r in res)
%sql SELECT expectation FROM genepheno_causation_inference_label_inference WHERE relation_id = :relation_id;
###Output
_____no_output_____
###Markdown
Error analysis round 3 False Positives:* [0] `18478198_Abstract.0_2_29_18478198_Abstract.0_2_11-12`: * "our aim was to establish whether"* [1] `17508172_Abstract.0_4_21_17508172_Abstract.0_4_32`: * "role" * "sodium ion channel" * [2] `19561293_Abstract.0_3_7_19561293_Abstract.0_3_10-11`: * "are currently unknown"* [3] `19956409_Abstract.0_1_8_19956409_Abstract.0_1_21`: * r'^To evaluate' * "the possible role" * [4] `19714249_Body.0_130_10_19714249_Body.0_130_18`: * '^Although" * "potential role" * "needs to be replicated" * "suggests", "possible", "role" * [5] `16297188_Title.0_1_5_16297188_Title.0_1_14`: * "role" * **Incorrectly supervised...?** * [6] `24412566_Body.0_70_72_24412566_Body.0_70_6`: * **_Long one with other genes in between..._*** [7] `16837472_Abstract.3_1_19_16837472_Abstract.3_1_10`: * "needs to be further studied" * "associated"* [8] `14966353_Abstract.0_1_41_14966353_Abstract.0_1_5`: * `UnicodeError`!* [9] `15547491_Abstract.0_1_23_15547491_Abstract.0_1_7-8-9-10`: * r'^To analyze' Error analysis round 2With new DSR code: False Positives* [0] `17183713_Body.0_111_12_17183713_Body.0_111_25`: * **"unlikely" on path between*** [1] `19561293_Abstract.0_3_7_19561293_Abstract.0_3_10-11`: * _"are unknown"- not on dep path between..._ * **Labeling error- doesn't this imply that there is a causal relation??*** [2] `17167409_Abstract.3_2_5_17167409_Abstract.3_2_13`: * **"is _not_ a common cause of..." - NEG modifying primary VB on path between!!!** * [3] `18538017_Body.0_12_5_18538017_Body.0_12_17`: * **Labeling error!? (marked because only partial P...?)*** [4] `20437121_Abstract.0_1_30_20437121_Abstract.0_1_15`: * "to determine" - in phrase between * [5] `10435725_Abstract.0_1_14_10435725_Abstract.0_1_20`: * "in mice" - off the main VB * [6] `23525542_Abstract.0_7_12_23525542_Abstract.0_7_24`: * **is _not_ due to..."- NEG modifying primary VB on path between!!!*** [7] `19995275_Abstract.0_1_2_19995275_Abstract.0_1_18`: * "has been implicated... in various studies with conflicting results" False Negatives* [0] `23874215_Body.0_172_3_23874215_Body.0_172_23-24-25-26`: * "role", "detected" - dep path between* [1] `17507029_Abstract.0_2_13_17507029_Abstract.0_2_6-7-8-9-10`: * "caused by" but also "association"... should do dep path in between...? * _a tough one..._* [2] `15219231_Body.0_121_8_15219231_Body.0_121_35`: * **Incorrect label** * [3] `25110572_Body.0_103_42_25110572_Body.0_103_18-19`: * **Incorrect label- should be association?** * [4] `17909190_Abstract.0_3_16_17909190_Abstract.0_3_25`: * **Incorrectly labeled...?** * [5] `22803640_Abstract.0_3_14_22803640_Abstract.0_3_24-25`: * **Incorrectly labeled- should be association?** * [6] `11170071_Abstract.0_1_3_11170071_Abstract.0_1_21`: * **Incorrectly labeled- wrong mention** * [7] `10511432_Abstract.0_1_12_10511432_Abstract.0_1_23`: * "A variety of mutations have been detected in patients with..."- should this be association? * [8] `10797440_Abstract.0_3_16_10797440_Abstract.0_3_3`: * _This one seems like should be straight-forwards..._ * **{{P}} are due to {{G}}*** [9] `23275784_Body.0_82_29_23275784_Body.0_82_13`: * _This one seems like should be straight-forwards..._ * **{{P}} result of / due to mutations in {{G}}**
###Code
# Filler
###Output
_____no_output_____
###Markdown
To investigate:1. Correlation with length of sentence? - **_No._**2. Low-MI words like '\_', 'the', 'gene'?3. _[tdl] Include sequence patterns too?_ FNs / recall analysis notes* `10982191_Title.0_1_8_10982191_Title.0_1_21-22-23`: * Shorter sentence * neg. weight from "gene" in between... is this just super common?* `19353431_Abstract.0_2_12_19353431_Abstract.0_2_1`: * Shorter sentence * neg. weight from "gene" in between... is this just super common?* `23285148_Body.0_4_32_23285148_Body.0_4_3`: * **Incorrectly labeled: should be false*** `23316347_Body.0_202_25_23316347_Body.0_202_54`: * _Longer sentence..._ * **BUG: Missing a left-of-mention (G: "mutation")!** * neg. weight from "\_" in betweeen * **BUG: left-of-mention[delay] happens twice!** * A lot of negative weight from "result"...? * `21304894_Body.0_110_4_21304894_Body.0_110_9-10-11`: * Shorter sentence * A lot of negative weight from "result"...? * **Is this just from a low-quality DSR?** * Duplicated features again! * `21776272_Body.0_60_46_21776272_Body.0_60_39-40`: * Longer sentence * A slightly tougher example: an inherited disorder ... with mutations in gene... * neg. weight from "gene" in between... is this just super common?* `19220582_Abstract.0_2_20_19220582_Abstract.0_2_5`: * 'We identified a mutation in a family with...' - should this be a positive example?? * neg. weight from "gene" in between... is this just super common? * neg. weight from "identify" and "affect"...? * **'c. mutation' - mutation doesn't get picked up as it's a child off the path...*** `23456818_Body.0_148_9_23456818_Body.0_148_21-22`: * `LEMMA:PARENTS-OF-BETWEEN-MENTION-and-MENTION[determine]` has huge negative weight * gene, patient, distribution, etc. - neg weight * negative impact from `PARENTS OF`...* `20429427_Abstract.0_1_2_20429427_Abstract.0_1_14`: * **Key word like "mutation" is off main path... ("responsible -> mutation -> whose")** * **STOPWORDS: "the"** * **BUG: dep_path labels are all None...**, **BUG: left-siblings doubled*** `21031598_Body.0_24_25_21031598_Body.0_24_9`: * Need a feature like `direct parent of mention` * NEG: 'site', 'gene' * `INV_`* `22670894_Title.0_1_16_22670894_Title.0_1_7-8`: * NEG: 'the', 'gene', 'locus' * **'due to' just dropped from the dep tree!*** `22887726_Abstract.0_5_33_22887726_Abstract.0_5_54-55`: * **Incorrectly labeled for causation?*** `19641605_Abstract.0_3_14_19641605_Abstract.0_3_22`: * This one has "cause", exp = 0.89, seems like dead match... * **BUG: doubles of stuff!!!!!*** `23879989_Abstract.0_1_3_23879989_Abstract.0_1_12-13`: * This one has "cause", exp = 0.87, seems like dead match... * **BUG: doubles of stuff!!!!!** * `LEMMA:FILTER-BY(pos=NN):BETWEEN-MENTION-and-MENTION[_]` * 'distinct', 'mutation _ cause'... * **_Why does '\_' have such negative weight??_*** `21850180_Body.0_62_14_21850180_Body.0_62_26-27`: * This one again seems like should be a dead match... * **BUG: Double of word "three"!** * Key word "responsible" not included...? * NEG: 'identify', 'i.e.', '_ _ _'* `20683840_Abstract.0_4_12_20683840_Abstract.0_4_33`: * UnicodeError!* `17495019_Title.0_1_5_17495019_Title.0_1_18`: * **Incorrectly labeled for causation?** * _Why is '% patients' positive...?_* `18283249_Abstract.0_3_2_18283249_Abstract.0_3_16-17-18`: * **'are one of the factors' - is this correctly labeled for causation?*** `21203343_Body.0_10_3_21203343_Body.0_10_20`: * **'are described in...' 
- this at least seems on the border of "causation"** * expectation 0.85 * **BUG: doubles** * NEG: `_`* `24312213_Body.0_110_66_24312213_Body.0_110_73`: * **Interesting example of isolated subtree which should be direct match!** * Expectation 0.42??? * NEG: 'mutation result', `_`, 'result', 'influence' Final tally:* 55%: Negative weight from features that seem like they should be stop words* 25%: Incorrectly labeled or on the border* 40%: Bug of some sort in TreeDLib* 30%: Features that seems suprisingly weighted- due to low-quality DSRs? TODO:1. Fix bugs in treedlib - DONE2. Filter "stopwords" i.e. low-Chi-squared features - DONE3. Add manual weights to DSRs in `config.py` Testing the low-Chi-squared hypothesis
###Code
%sql SELECT COUNT(*) FROM genepheno_features;
%sql SELECT COUNT(DISTINCT(feature)) FROM genepheno_features;
%%sql
SELECT
gc.is_correct, COUNT(*)
FROM
genepheno_causation gc,
genepheno_features gf
WHERE
gc.relation_id = gf.relation_id
AND gf.feature LIKE '%the%'
GROUP BY
gc.is_correct;
%sql SELECT is_correct, COUNT(*) FROM genepheno_causation GROUP BY is_correct;
P_T = 40022.0/(116608.0+40022.0)
P_F = 116608.0/(116608.0+40022.0)
print P_T
print P_F
from collections import defaultdict
feats = defaultdict(lambda : [0,0])
with open('/lfs/raiders7/hdd/ajratner/dd-genomics/alex-results/chi-sq/chi-sq-gp.tsv', 'rb') as f:
for line in f:
feat, label, count = line.split('\t')
b = 0 if label == 't' else 1
feats[feat][b] = int(count)
feats['INV_DEP_LABEL:BETWEEN-MENTION-and-MENTION[nsubj_vmod_prepc_by]']
chi_sqs = []
for feat, counts in feats.iteritems():
total = float(counts[0] + counts[1])
chi_sqs.append([
(P_T-(counts[0]/total))**2 + (P_F-(counts[1]/total))**2,
feat
])
chi_sqs.sort()
with open('/lfs/raiders7/hdd/ajratner/dd-genomics/alex-results/chi-sq/chi-sq-gp-computed.tsv', 'wb') as f:
for x in chi_sqs:
f.write('\t'.join(map(str, x[::-1]))+'\n')
len(chi_sqs)
chi_sqs[500000]
thes = filter(lambda x : 'the' in x[1], chi_sqs)
len(thes)
thes[:100]
###Output
_____no_output_____
###Markdown
Testing the length-bias hypothesisIs their a bias towards longer sentences (because more high-weight keywords?)
###Code
rows = []
with open('/lfs/raiders7/hdd/ajratner/dd-genomics/alex-results/test-len-corr/all_rel_sents.tsv', 'rb') as f:
for line in f:
r = line.rstrip().split('\t')
rows.append([float(r[1]), len(r[2].split('|^|'))])
print len(rows)
from scipy.stats import pearsonr
exps, lens = zip(*filter(lambda r : r[0] > 0.7, rows))
pearsonr(exps, lens)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import random
exps, lens = zip(*random.sample(filter(lambda r : r[0] > 0.5, rows), 1000))
plt.scatter(lens, exps)
###Output
_____no_output_____
###Markdown
Debugging pipelineWe'll debug here, also to show the general most current procedure for debugging treedlib on examples in a SQL database (e.g. from DeepDive)
###Code
%sql postgresql://ajratner@localhost:5432/deepdive_spouse
%%sql
SELECT sentence_text
FROM sentences
WHERE doc_id = '79205745-b593-4b98-8a94-da6b8238fefc' AND sentence_index = 32;
res = %sql SELECT tokens AS "words", lemmas, pos_tags, ner_tags, dep_types AS "dep_labels", dep_tokens AS "dep_parents" FROM sentences WHERE doc_id = '79205745-b593-4b98-8a94-da6b8238fefc' AND sentence_index = 32;
xts = map(corenlp_to_xmltree, res)
xt = xts[0]
xt.render_tree(highlight=[[21,22], [33,34]])
print_gen(get_relation_features(xt.root, [21,22], [33,34]))
###Output
_____no_output_____
###Markdown
Feature focus: Preceding statements which nullify or negate meaningExample:> _Ex1:_ To investigate whether mutations in the SURF1 gene are a cause of Charcot-Marie-Tooth -LRB- CMT -RRB- disease> _Ex2:_ To investigate the genetic effect of a new mutation found in exon 17 of the myophosphorylase -LRB- PYGM -RRB- gene as a cause of McArdle disease -LRB- also known as type 5 glycogenosis -RRB-.Notes:* These seem to mostly be **_modifiers of the primary verb_**? * We are only sampling from a limited set of patterns of sentences (due to narrow DSR set) currently...* Modifiers in general...?* _I know how RNNs claim to / do handle this phenomenon..._ *
###Code
%%sql
SELECT relation_id
FROM genepheno_causation
WHERE doc_id = '15262743' AND section_id = 'Abstract.0' AND sent_id = 1;
ex1_id = ('24027061', 'Abstract.0', 1)
ex1_raw="""
<node dep_parent="0" lemma="investigate" ner="O" pos="VB" word="investigate" word_idx="1"><node dep_parent="2" dep_path="aux" lemma="to" ner="O" pos="TO" word="To" word_idx="0"/><node dep_parent="2" dep_path="ccomp" lemma="cause" ner="O" pos="NN" word="cause" word_idx="10"><node dep_parent="11" dep_path="mark" lemma="whether" ner="O" pos="IN" word="whether" word_idx="2"/><node dep_parent="11" dep_path="nsubj" lemma="mutation" ner="O" pos="NNS" word="mutations" word_idx="3"><node dep_parent="4" dep_path="prep_in" lemma="gene" ner="O" pos="NN" word="gene" word_idx="7"><node dep_parent="8" dep_path="det" lemma="the" ner="O" pos="DT" word="the" word_idx="5"/><node dep_parent="8" dep_path="nn" lemma="surf1" ner="O" pos="NN" word="SURF1" word_idx="6"/></node></node><node dep_parent="11" dep_path="cop" lemma="be" ner="O" pos="VBP" word="are" word_idx="8"/><node dep_parent="11" dep_path="det" lemma="a" ner="O" pos="DT" word="a" word_idx="9"/><node dep_parent="11" dep_path="prep_of" lemma="Charcot-Marie-Tooth" ner="O" pos="NNP" word="Charcot-Marie-Tooth" word_idx="12"/><node dep_parent="11" dep_path="dep" lemma="disease" ner="O" pos="NN" word="disease" word_idx="16"><node dep_parent="17" dep_path="appos" lemma="CMT" ner="O" pos="NNP" word="CMT" word_idx="14"/></node></node></node>
"""
xt1 = XMLTree(et.fromstring(ex1_raw))
ex2_id = ('15262743', 'Abstract.0', 1)
ex2_raw="""
<node dep_parent="0" lemma="investigate" ner="O" pos="VB" word="investigate" word_idx="1"><node dep_parent="2" dep_path="aux" lemma="to" ner="O" pos="TO" word="To" word_idx="0"/><node dep_parent="2" dep_path="dobj" lemma="effect" ner="O" pos="NN" word="effect" word_idx="4"><node dep_parent="5" dep_path="det" lemma="the" ner="O" pos="DT" word="the" word_idx="2"/><node dep_parent="5" dep_path="amod" lemma="genetic" ner="O" pos="JJ" word="genetic" word_idx="3"/><node dep_parent="5" dep_path="prep_of" lemma="mutation" ner="O" pos="NN" word="mutation" word_idx="8"><node dep_parent="9" dep_path="det" lemma="a" ner="O" pos="DT" word="a" word_idx="6"/><node dep_parent="9" dep_path="amod" lemma="new" ner="O" pos="JJ" word="new" word_idx="7"/><node dep_parent="9" dep_path="vmod" lemma="find" ner="O" pos="VBN" word="found" word_idx="9"><node dep_parent="10" dep_path="prep_in" lemma="exon" ner="O" pos="NN" word="exon" word_idx="11"><node dep_parent="12" dep_path="num" lemma="17" ner="NUMBER" pos="CD" word="17" word_idx="12"/><node dep_parent="12" dep_path="prep_of" lemma="gene" ner="O" pos="NN" word="gene" word_idx="19"><node dep_parent="20" dep_path="det" lemma="the" ner="O" pos="DT" word="the" word_idx="14"/><node dep_parent="20" dep_path="nn" lemma="myophosphorylase" ner="O" pos="NN" word="myophosphorylase" word_idx="15"/><node dep_parent="20" dep_path="nn" lemma="pygm" ner="O" pos="NN" word="PYGM" word_idx="17"/></node></node><node dep_parent="10" dep_path="prep_as" lemma="cause" ner="O" pos="NN" word="cause" word_idx="22"><node dep_parent="23" dep_path="det" lemma="a" ner="O" pos="DT" word="a" word_idx="21"/><node dep_parent="23" dep_path="prep_of" lemma="disease" ner="O" pos="NN" word="disease" word_idx="25"><node dep_parent="26" dep_path="nn" lemma="McArdle" ner="PERSON" pos="NNP" word="McArdle" word_idx="24"/><node dep_parent="26" dep_path="vmod" lemma="know" ner="O" pos="VBN" word="known" word_idx="28"><node dep_parent="29" dep_path="advmod" lemma="also" ner="O" pos="RB" word="also" word_idx="27"/><node dep_parent="29" dep_path="prep_as" lemma="glycogenosis" ner="O" pos="NN" word="glycogenosis" word_idx="32"><node dep_parent="33" dep_path="nn" lemma="type" ner="O" pos="NN" word="type" word_idx="30"/><node dep_parent="33" dep_path="num" lemma="5" ner="NUMBER" pos="CD" word="5" word_idx="31"/></node></node></node></node></node></node></node></node>
"""
xt2 = XMLTree(et.fromstring(ex2_raw))
xt1.render_tree()
xt2.render_tree()
###Output
_____no_output_____
###Markdown
Testing XML speedsHow does it compare between:* parse to XML via this python code, store as string, then parse from string at runtime* just parse to XML at runtime via this python code?
###Code
# Map sentence to xmltree
%time xts = map(corenlp_to_xmltree, rows)
# Pre-process to xml string
xmls = [xt.to_str() for xt in map(corenlp_to_xmltree, rows)]
# Parse @ runtime using lxml
%time roots = map(et.fromstring, xmls)
###Output
_____no_output_____
###Markdown
Table example
###Code
# Some wishful thinking...
table_xml = """
<div class="table-wrapper">
<h3>Causal genomic relationships</h3>
<table>
<tr><th>Gene</th><th>Variant</th><th>Phenotype</th></tr>
<tr><td>ABC</td><td><i>AG34</i></td><td>Headaches during defecation</td></tr>
<tr><td>BDF</td><td><i>CT2</i></td><td>Defecation during headaches</td></tr>
<tr><td>XYG</td><td><i>AT456</i></td><td>Defecasomnia</td></tr>
</table>
</div>
"""
from IPython.core.display import display_html, HTML
display_html(HTML(table_xml))
###Output
_____no_output_____
|
Notebooks/Network_analysis/.ipynb_checkpoints/Global_hits_overlap-checkpoint.ipynb
|
###Markdown
Generate an order for the regions
###Code
region_global = pd.DataFrame(columns=['chr_idx', 'left', 'region'])
for r in np.unique(global_hits_df.region.values):
region_global_series = pd.Series()
chr_name, left, right = re.split('[:|-]', r)
if chr_name[3:] == 'X':
region_global_series['chr_idx'] = 24
else:
region_global_series['chr_idx'] = int(chr_name[3:])
    region_global_series['left'] = int(left)  # cast to int so the sort below is numeric, not lexicographic
region_global_series['region'] = r
region_global = region_global.append(region_global_series, ignore_index=True)
sorted_regions = region_global.sort_values(by=['chr_idx', 'left']).region.values
sorted_regions
global_hits_df.loc[global_hits_df.region == 'chr8:128044869-128045269']
global_hits_df.loc[global_hits_df.region == 'chr6:135323137-135323537'].shape
overlap_num_array = []
overlap_ratio_array = []
for i in np.arange(len(sorted_regions)):
current_region = sorted_regions[i]
current_hits = global_hits_df.loc[global_hits_df.region == current_region].gene_names.values
for ii in np.arange(len(sorted_regions)):
test_region = sorted_regions[ii]
test_hits = global_hits_df.loc[global_hits_df.region == test_region].gene_names.values
overlap_num = len(set(current_hits).intersection(set(test_hits)))
overlap_ratio = overlap_num / len(current_hits)
if current_region == 'chr6:135323137-135323537':
print(sorted_regions[i], end='\t')
print(len(current_hits), end = '\t')
print(sorted_regions[ii], end='\t')
print(len(test_hits), end = '\t')
print(overlap_num)
overlap_num_array.append(overlap_num)
overlap_ratio_array.append(overlap_ratio)
np.sum(np.array(overlap_num_array).reshape(35,35) > 0)
(143-35) / 2
num = 34
summary = 0
while num > 0:
summary += num
num -= 1
summary
from scipy.stats import hypergeom
# Survival function P(X > 54) for a hypergeometric draw: population M=133652, n=68 marked items, sample size N=595 (scipy argument order: k, M, n, N)
hypergeom.sf(54, 133652, 68, 595)
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import cm as CM
%matplotlib inline
matplotlib.rcParams['pdf.fonttype'] = 42
logt_matrix = np.log2((np.array(overlap_num_array) + 1)).reshape(35,35)
fig, ax = plt.subplots(figsize=(12,10))
cmap = CM.get_cmap('YlGnBu', 50)
count_heatmap = ax.imshow(logt_matrix, cmap=cmap, vmin=0, vmax=8)
ax.set_xticks(np.arange(len(sorted_regions)))
ax.set_yticks(np.arange(len(sorted_regions)))
# ... and label them with the respective list entries
ax.set_xticklabels(sorted_regions, rotation=90)
ax.set_yticklabels(sorted_regions)
cbar = fig.colorbar(count_heatmap, ax=ax)
plt.savefig('Overlapped_global_hits.pdf')
###Output
_____no_output_____
|
examples/beta/lending-club/loans-enriched/train_model.ipynb
|
###Markdown
Read historical data from Beneath
###Code
# Imports used below (beneath for reading, sklearn for the model, joblib for saving)
import beneath
import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
df = await beneath.easy_read("epg/lending-club/loans-history")
###Output
_____no_output_____
###Markdown
Create target variable
###Code
def make_binary(outcome):
if outcome in ['Charged Off', 'Default']:
return True
else:
return False
df['loan_status_binary'] = df['loan_status'].apply(lambda x: make_binary(x))
###Output
_____no_output_____
###Markdown
Set the target
###Code
Y = df[['loan_status_binary']]
###Output
_____no_output_____
###Markdown
Preprocess the input data Drop rows where there are nulls
###Code
df = df.loc[ df['dti'].isna() == False ]
df = df.loc[ df['revol_util'].isna() == False ]
###Output
_____no_output_____
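###Markdown
A quick optional check that the filter worked and these feature columns no longer contain nulls:
###Code
# Both counts should now be zero
print(df[['dti', 'revol_util']].isna().sum())
###Output
_____no_output_____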
###Markdown
Set the input features
###Code
X = df[['term', 'int_rate', 'loan_amount', 'annual_inc',
'acc_now_delinq', 'dti', 'fico_range_high', 'open_acc', 'pub_rec', 'revol_util']]
###Output
_____no_output_____
###Markdown
Train and score the model
###Code
# Split dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=2020)
# train model
clf = LogisticRegression(random_state=0).fit(X_train, y_train)
# predict
y_pred = clf.predict(X_test)
# score
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Save the model in a file
###Code
joblib.dump(clf, 'model.pkl')
###Output
_____no_output_____
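###Markdown
As a quick check that the serialized model is usable, we can load it back and score a few rows (a minimal sketch, assuming the same session so that `X_test` is still defined):
###Code
# Reload the pickled model and make sure it still predicts
clf_loaded = joblib.load('model.pkl')
clf_loaded.predict(X_test[:5])
###Output
_____no_output_____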
|
Quarterly+Time+Series+of+the+Number+of+Australian+Residents.ipynb
|
###Markdown
Data Quarterly Time Series of the Number of Australian Residents https://vincentarelbundock.github.io/Rdatasets/csv/datasets/austres.csv
###Code
# Imports needed by the cells below
import datetime
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from dateutil.relativedelta import relativedelta
from statsmodels.tsa.seasonal import seasonal_decompose
df = pd.read_csv("https://vincentarelbundock.github.io/Rdatasets/csv/datasets/austres.csv")
df = df.drop('Unnamed: 0', 1)
df.head()
df.tail()
len(df)
df.shape[0]
start = datetime.datetime.strptime("1971-03-31", "%Y-%m-%d")
print(start)
#!pip install arrow
start = datetime.datetime.strptime("1971-03-31", "%Y-%m-%d")
print(start)
date_list = [start + relativedelta(months=x) for x in range(0,3*df.shape[0])]
date_list
len(date_list)
c2=[]
for i in range(0,3*len(df),3):
c2.append(date_list[i])
c2
len(c2)
df['index'] =c2
df.set_index(['index'], inplace=True)
df.index.name=None
df.head()
df=df.drop('time',1)
df.tail()
df.austres.plot(figsize=(12,8), title= 'Number of Australian Residents', fontsize=14)
plt.savefig('austrailian_residents.png', bbox_inches='tight')
decomposition = seasonal_decompose(df.austres, freq=4)
fig = plt.figure()
fig = decomposition.plot()
fig.set_size_inches(15, 8)
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
print(p)
import itertools
import warnings
# Generate all different combinations of p, q and q triplets
pdq = list(itertools.product(p, d, q))
print(pdq)
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
y=df
warnings.filterwarnings("ignore") # specify to ignore warning messages
c4=[]
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
mod = sm.tsa.statespace.SARIMAX(y,
order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
c4.append('ARIMA{}x{}12 - AIC:{}'.format(param, param_seasonal, results.aic))
except:
continue
warnings.filterwarnings("ignore") # specify to ignore warning messages
c3=[]
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
mod = sm.tsa.statespace.SARIMAX(y,
order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
c3.append( results.aic)
except:
continue
c3
len(c3)
import numpy as np
index_min = np.argmin(c3)
index_min
c4[index_min]
type(c4[index_min])
from statsmodels.tsa.x13 import x13_arima_select_order
order1=c4[index_min][6:13]
order1
type(order1)
order1=[int(s) for s in order1.split(',')]
order1
type(order1)
seasonal_order1=c4[index_min][16:27]
seasonal_order1
seasonal_order1=[int(s) for s in seasonal_order1.split(',')]
seasonal_order1
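# Alternative to the string slicing above (sketch): recover the best (order, seasonal_order)
# pair directly from the parameter grid. This assumes every SARIMAX fit in the loops above
# succeeded, so that c3/c4 line up one-to-one with the pdq x seasonal_pdq combinations.
param_grid = [(p_, s_) for p_ in pdq for s_ in seasonal_pdq]
best_order, best_seasonal_order = param_grid[index_min]
print(best_order, best_seasonal_order)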
mod = sm.tsa.statespace.SARIMAX(df.austres, trend='n', order=order1, seasonal_order=seasonal_order1)
results = mod.fit()
print (results.summary())
results.predict(start=78,end=99)
results.predict(start=78,end=99).plot()
###Output
_____no_output_____
|
nb/Manejo de excepciones.ipynb
|
###Markdown
Exception Handling [Exceptions](https://docs.python.org/3/reference/executionmodel.htmlexceptions) are a mechanism Python provides for handling errors or unexpected situations that occur while a program is running. With this mechanism, the program's flow of execution is interrupted when an error occurs and an exception is "raised". Control then passes to another block of statements, which is in charge of handling the error. The try and except statements The [try](https://docs.python.org/3/reference/compound_stmts.htmltry) and [except](https://docs.python.org/3/reference/compound_stmts.htmlexcept) statements are what implement exception handling in Python. The **_try_** block holds the code that may cause the exception to be raised, and the **_except_** block holds the code that handles the exception. The basic structure is the following: ```python try: <code that may raise the exception> except: <code that handles the exception> ``` For example, a call to the **_float()_** function can cause an error if the input string does not correspond to a number.
###Code
x = float("8,5")
try:
x = float("8,5")
except:
print("Por favor utilice un número")
###Output
_____no_output_____
###Markdown
The following example handles the same error, which can arise from erroneous input entered by the user.
###Code
fahr_hilera = input('Ingrese la temperatura en grados Fahrenheit: ')
try:
fahr = float(fahr_hilera)
celsius = (fahr - 32.0) * 5.0 / 9.0
print("El equivalente el grados Celsius es: ", celsius)
except:
print('Por favor ingrese un número.')
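# A variant (sketch): catching the specific exception type (ValueError) instead of a bare
# 'except' avoids silently swallowing unrelated errors. The string below is an illustrative
# bad value, not taken from the original notebook.
try:
    bad_value = float("abc")
except ValueError:
    print('Por favor ingrese un número.')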
###Output
_____no_output_____
|
Model backlog/Train/100-jigsaw-fold2-xlm-roberta-large-best.ipynb
|
###Markdown
Dependencies
###Code
import json, warnings, shutil, glob
from jigsaw_utility_scripts import *
from scripts_step_lr_schedulers import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/jigsaw-data-split-roberta-192-ratio-2-upper/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
usecols=['comment_text', 'toxic', 'lang'])
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print('Validation samples: %d' % len(valid_df))
display(valid_df.head())
base_data_path = 'fold_2/'
fold_n = 2
# Unzip files
!tar -xvf /kaggle/input/jigsaw-data-split-roberta-192-ratio-2-upper/fold_2.tar.gz
###Output
Train samples: 400830
###Markdown
Model parameters
###Code
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 128,
"EPOCHS": 4,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": None,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Learning rate schedule
###Code
lr_min = 1e-7
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
step_size = len(k_fold[k_fold[f'fold_{fold_n}'] == 'train']) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 1
decay = .9997
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps, hold_max_steps,
lr_start, lr_max, lr_min, decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 1e-07 to 9.84e-06 to 1.06e-06
###Markdown
Model
###Code
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
cls_token = last_hidden_state[:, 0, :]
output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
model = Model(inputs=[input_ids, attention_mask], outputs=output)
return model
###Output
_____no_output_____
###Markdown
Train
###Code
# Load data
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train_int.npy').reshape(x_train.shape[1], 1).astype(np.float32)
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy').reshape(x_valid_ml.shape[1], 1).astype(np.float32)
#################### ADD TAIL ####################
x_train = np.hstack([x_train, np.load(base_data_path + 'x_train_tail.npy')])
y_train = np.vstack([y_train, y_train])
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid_ml.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED))
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
train_data_iter = iter(train_dist_ds)
valid_data_iter = iter(valid_dist_ds)
# Step functions
@tf.function
def train_step(data_iter):
def train_step_fn(x, y):
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss = loss_fn(y, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_auc.update_state(y, probabilities)
train_loss.update_state(loss)
for _ in tf.range(step_size):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss = loss_fn(y, probabilities)
valid_auc.update_state(y, probabilities)
valid_loss.update_state(loss)
for _ in tf.range(valid_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
# Train model
with strategy.scope():
model = model_fn(config['MAX_LEN'])
optimizer = optimizers.Adam(learning_rate=lambda:
exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps, hold_max_steps, lr_start,
lr_max, lr_min, decay))
loss_fn = losses.binary_crossentropy
train_auc = metrics.AUC()
valid_auc = metrics.AUC()
train_loss = metrics.Sum()
valid_loss = metrics.Sum()
metrics_dict = {'loss': train_loss, 'auc': train_auc,
'val_loss': valid_loss, 'val_auc': valid_auc}
history = custom_fit(model, metrics_dict, train_step, valid_step, train_data_iter, valid_data_iter,
step_size, valid_step_size, config['BATCH_SIZE'], config['EPOCHS'],
config['ES_PATIENCE'], save_last=False)
# model.save_weights('model.h5')
# Make predictions
x_train = np.load(base_data_path + 'x_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml_eval = np.load(database_base_path + 'x_valid.npy')
train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
k_fold.loc[k_fold[f'fold_{fold_n}'] == 'train', f'pred_{fold_n}'] = np.round(train_preds)
k_fold.loc[k_fold[f'fold_{fold_n}'] == 'validation', f'pred_{fold_n}'] = np.round(valid_preds)
valid_df[f'pred_{fold_n}'] = valid_ml_preds
# Fine-tune on validation set
#################### ADD TAIL ####################
x_valid_ml_tail = np.hstack([x_valid_ml, np.load(database_base_path + 'x_valid_tail.npy')])
y_valid_ml_tail = np.vstack([y_valid_ml, y_valid_ml])
valid_step_size_tail = x_valid_ml_tail.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_ml_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_valid_ml_tail, y_valid_ml_tail,
config['BATCH_SIZE'], AUTO, seed=SEED))
train_ml_data_iter = iter(train_ml_dist_ds)
history_ml = custom_fit(model, metrics_dict, train_step, valid_step, train_ml_data_iter, valid_data_iter,
valid_step_size_tail, valid_step_size, config['BATCH_SIZE'], 1,
config['ES_PATIENCE'], save_last=False)
# Join history
for key in history_ml.keys():
history[key] += history_ml[key]
model.save_weights('model_ml.h5')
# Make predictions
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
valid_df[f'pred_ml_{fold_n}'] = valid_ml_preds
### Delete data dir
shutil.rmtree(base_data_path)
###Output
Train for 5010 steps, validate for 62 steps
EPOCH 1/4
time: 1714.5s loss: 0.2420 auc: 0.9595 val_loss: 0.2572 val_auc: 0.9274
EPOCH 2/4
time: 1520.0s loss: 0.1596 auc: 0.9821 val_loss: 0.2858 val_auc: 0.9170
EPOCH 3/4
time: 1519.9s loss: 0.1408 auc: 0.9859 val_loss: 0.3013 val_auc: 0.9111
EPOCH 4/4
time: 1519.9s loss: 0.1360 auc: 0.9868 val_loss: 0.3064 val_auc: 0.9107
Training finished
Train for 125 steps, validate for 62 steps
EPOCH 1/1
time: 1630.1s loss: 7.3059 auc: 0.9563 val_loss: 0.1266 val_auc: 0.9807
Training finished
###Markdown
Model loss graph
###Code
plot_metrics(history)
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
display(evaluate_model_single_fold(k_fold, fold_n, label_col='toxic_int').style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
train_set = k_fold[k_fold[f'fold_{fold_n}'] == 'train']
validation_set = k_fold[k_fold[f'fold_{fold_n}'] == 'validation']
plot_confusion_matrix(train_set['toxic_int'], train_set[f'pred_{fold_n}'],
validation_set['toxic_int'], validation_set[f'pred_{fold_n}'])
###Output
_____no_output_____
###Markdown
Model evaluation by language
###Code
display(evaluate_model_single_fold_lang(valid_df, fold_n).style.applymap(color_map))
# ML fine-tunned preds
display(evaluate_model_single_fold_lang(valid_df, fold_n, pred_col='pred_ml').style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
pd.set_option('max_colwidth', 120)
print('English validation set')
display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
print('Multilingual validation set')
display(valid_df[['comment_text', 'toxic'] + [c for c in valid_df.columns if c.startswith('pred')]].head(10))
###Output
English validation set
###Markdown
Test set predictions
###Code
x_test = np.load(database_base_path + 'x_test.npy')
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO))
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
display(submission.describe())
display(submission.head(10))
###Output
_____no_output_____
|
docs/source/examples/drone-point-cloud.ipynb
|
###Markdown
Processing drone-based data
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from pymccrgb import mcc, mcc_rgb, load_laz, write_laz
from pymccrgb.plotting import plot_points_3d, plot_results
###Output
_____no_output_____
###Markdown
Load a point cloud
###Code
data = load_laz('data/points_drone.laz')
###Output
_____no_output_____
###Markdown
Try MCC
###Code
ground, labels = mcc(data)
###Output
_____no_output_____
###Markdown
MCC-RGB with one classification step
###Code
# training_tols is left empty here; presumably a single height tolerance value (e.g., in
# meters) would be supplied for the one-step reclassification described above.
training_heights = []
ground, labels = mcc_rgb(data, training_tols=training_heights)
###Output
_____no_output_____
###Markdown
MCC-RGB with multiple classes
###Code
# training_tols is again left empty; presumably several height tolerance values, one per
# training class, would be supplied for the multi-class case described above.
training_heights = []
ground, labels = mcc_rgb(data, training_tols=training_heights)
###Output
_____no_output_____
###Markdown
Saving a classified point cloud
###Code
write_laz('classified_drone.laz')
###Output
_____no_output_____
|
uc_insurance.ipynb
|
###Markdown
Where is the University of California Getting its Insurance? This notebook analyses UC insurance contracts obtained through a Freedom of Information Act inquiry.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
rawdata_filename = 'CPRA #20-3063 9.23.2020.xlsx'
# Load all sheets
rawdata_excel = pd.read_excel(rawdata_filename, sheet_name=None)
# Get sheet names
sheet_names = list(rawdata_excel.keys())
sheet_names
#Define output folders
png_folder = 'png_figures/'
pdf_folder = 'pdf_figures/'
###Output
_____no_output_____
###Markdown
Let's look at the UC Regents Policies first
###Code
#Clean up the header and label the renewal date column
regents_policies = pd.read_excel(rawdata_filename, sheet_name=sheet_names[0], header=2)
#There are trailing spaces in some of the column names in the raw datafile
#Rename the columns
regents_policies.rename(columns={'Policy ': 'Policy',
'Carrier ': 'Carrier',
'Premium ': 'Premium',
'Unnamed: 3': "Renewal"}, inplace=True)
regents_policies
#Clean up data types
regents_policies = regents_policies.convert_dtypes()
regents_policies
regents_policies.iloc[30:40, 3]
regents_policies[regents_policies['Renewal'].str.contains('renew')]
#Pull out dates
def get_renewal_date(string):
return string.split(' ')[1]
###Output
_____no_output_____
###Markdown
There is a space after the '*' in row 29, row 33 of the original datafile.
###Code
# There is a space after the '*' in row 29, row 33 of the original datafile.
regents_policies.loc[29, 'Renewal'] = regents_policies.loc[29, 'Renewal'].replace('* ', '*')
#Pull out dates
regents_policies[regents_policies['Renewal'].str.contains('renew')]['Renewal'].apply(lambda x: get_renewal_date(x))
# Make a new column for renewal dates
regents_policies['Renewal Date'] = '<NA>'
# Pull out the actual renewal dates
regents_policies.loc[29:43, 'Renewal Date'] = regents_policies[regents_policies['Renewal'].str.contains('renew')]['Renewal'].apply(lambda x: get_renewal_date(x))
#Convert to datetime; errors='coerce' turns the '<NA>' placeholder strings into NaT.
#(With errors='ignore', a single unparseable value left the whole column unconverted,
# which is why this step previously was not working.)
regents_policies['Renewal Date'] = pd.to_datetime(regents_policies['Renewal Date'], errors='coerce')
regents_policies.dtypes
#Remove leading or trailing spaces
regents_policies['Carrier'] = regents_policies['Carrier'].str.strip()
#Drop useless column
regents_policies = regents_policies.drop(columns=['Renewal'])
#So which companies are we paying and how much?
df = regents_policies
df = df.sort_values('Premium', ascending=False)
df.plot.bar('Carrier', 'Premium', figsize=(12, 5), color='#72CDF4');
plt.title("Where are the UC's premiums? - 2020")
plt.legend().remove();
plt.xlabel(None)
plt.ylabel('United States Dollars');
#Give dollars with zeros and commas
plt.gca().get_yaxis().set_major_formatter(
ticker.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.ylim(0, 5000000);
plt.savefig(pdf_folder+"Where are the UC's premiums -2020.pdf", bbox_inches='tight')
plt.savefig(png_folder+"Where are the UC's premiums -2020.png", bbox_inches='tight')
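# Optional helper (sketch): the two-line dollar formatting above is repeated for every
# chart in this notebook; a small function like this could replace those repeated lines.
def format_dollar_axis(ax):
    ax.get_yaxis().set_major_formatter(
        ticker.FuncFormatter(lambda y, p: format(int(y), ',')))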
# What are the biggest premiums covering?
df.head(5)
###Output
_____no_output_____
###Markdown
"Projects with a projected construction value of $25 million and over (total for all phases) are to be insured under the University Controlled Insurance Program, or “UCIP.” The UCIP is a single insurance program that insures the University of California, Enrolled Contractors, Enrolled Subcontractors, and other designated parties (“Contractors”) for Work performed at the Project Site."https://www.ucop.edu/construction-services/programs-and-processes/university-controlled-insurance-program/ucip.html
###Code
#When are the next big renewals?
#The only renewal dates left are November 1st
#Chubb is the biggest vendor.
df = regents_policies
df = df[df['Renewal Date'].notna()]
#Sort by date (works now that 'Renewal Date' is a real datetime column)
df = df.sort_values('Renewal Date', ascending=False).reset_index()
df.plot.bar('Renewal Date', 'Premium');
#What if you group the premiums?
df = regents_policies
df = df.groupby('Carrier').sum()
df = df.sort_values('Premium', ascending=False).reset_index()
df.plot.bar('Carrier', 'Premium', figsize=(10,5), color='#005581');
plt.title('Who are the Regents paying and how much? - 2020')
plt.legend().remove();
plt.xlabel(None)
plt.ylabel('United States Dollars');
#Give dollars with zeros and commas
plt.gca().get_yaxis().set_major_formatter(
ticker.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.ylim(0, 5000000);
plt.savefig(pdf_folder+"Where are the Regents paying and how much -2020.pdf", bbox_inches='tight');
plt.savefig(png_folder+"Where are the Regents paying and how much -2020.png", bbox_inches='tight');
## What about the other sheets?
#Clean up the header and label the renewal date column
fiatlux_policies = pd.read_excel(rawdata_filename, sheet_name=sheet_names[1], header=2)
#Rename the columns
fiatlux_policies.rename(columns={'Policy ': 'Policy',
'Carrier ': 'Carrier',
'Premium ': 'Premium',
'Unnamed: 3': "Renewal"}, inplace=True)
#Remove leading or trailing spaces
fiatlux_policies['Carrier'] = fiatlux_policies['Carrier'].str.strip()
fiatlux_policies
df = fiatlux_policies
df = df.sort_values('Premium', ascending=False)
df.plot.bar('Carrier', 'Premium', figsize=(12, 5), color='#FFE552');
plt.title("Where are FiatLux's premiums? - 2020")
plt.legend().remove();
plt.xlabel(None)
plt.ylabel('United States Dollars');
#Give dollars with zeros and commas
plt.gca().get_yaxis().set_major_formatter(
ticker.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.ylim(0, 6000000);
plt.savefig(pdf_folder+"Where are FiatLux's premiums -2020.pdf");
plt.savefig(png_folder+"Where are FiatLux's premiums -2020.png");
#What if you group the premiums?
df = fiatlux_policies
df = df.groupby('Carrier').sum()
df = df.sort_values('Premium', ascending=False).reset_index()
df.plot.bar('Carrier', 'Premium', figsize=(10,5), color='#FFD200');
plt.title('Who are FiatLux paying and how much? - 2020')
plt.legend().remove();
plt.xlabel(None)
plt.ylabel('United States Dollars');
#Give dollars with zeros and commas
plt.gca().get_yaxis().set_major_formatter(
ticker.FuncFormatter(lambda y, p: format(int(y), ',')))
#plt.ylim(0, 5000000);
plt.savefig(pdf_folder+"Who are FiatLux paying and how much -2020.pdf", bbox_inches='tight');
plt.savefig(png_folder+"Who are FiatLux paying and how much -2020.png", bbox_inches='tight');
# Can these companies be grouped more coursely?
uc_construction = pd.read_excel(rawdata_filename, sheet_name=sheet_names[2], header=2)
#Rename the columns
uc_construction.rename(columns={'COVERAGE TYPE': 'Policy',
'INSURANCE COMPANY': 'Carrier',
'PREMIUM': 'Premium'}, inplace=True)
#Remove leading or trailing spaces
uc_construction['Carrier'] = uc_construction['Carrier'].str.strip()
uc_construction
df = uc_construction
df = df.sort_values('Premium', ascending=False)
df.plot.bar('Carrier', 'Premium', figsize=(12, 5), color='#BEB6AF');
plt.title("How is UC insuring construction? - 2020")
plt.legend().remove();
plt.xlabel(None)
plt.ylabel('United States Dollars');
#Give dollars with zeros and commas
plt.gca().get_yaxis().set_major_formatter(
ticker.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.ylim(0, 4000000);
plt.savefig(pdf_folder+"How is UC insuring construction - 2020.pdf", bbox_inches='tight');
plt.savefig(png_folder+"How is UC insuring construction - 2020.png", bbox_inches='tight');
#What if you group the premiums?
df = uc_construction
df = df.groupby('Carrier').sum()
df = df.sort_values('Premium', ascending=False).reset_index()
df.plot.bar('Carrier', 'Premium', figsize=(10,5), color='#8F8884');
plt.title('Who are UC paying to cover construction and how much? - 2020')
plt.legend().remove();
plt.xlabel(None)
plt.ylabel('United States Dollars');
#Give dollars with zeros and commas
plt.gca().get_yaxis().set_major_formatter(
ticker.FuncFormatter(lambda y, p: format(int(y), ',')))
#plt.ylim(0, 5000000);
plt.savefig(pdf_folder+"Who are UC paying to cover construction and how much - 2020.pdf", bbox_inches='tight');
plt.savefig(png_folder+"Who are UC paying to cover construction and how much - 2020.png", bbox_inches='tight');
#How much are we paying Liberty Mutual in total?
combined = pd.concat([regents_policies, fiatlux_policies, uc_construction])
# Remove unnecessary columns
combined = combined.drop(columns = ['Renewal Date', 'Renewal'])
totals = combined.groupby(by='Carrier').sum().sort_values('Premium', ascending=False).reset_index()
totals
list(totals.Carrier)
df = totals
df.plot.bar('Carrier', 'Premium', figsize=(15,5), color='#B4975A');
plt.title('Who are UC paying and how much? - 2020')
plt.legend().remove();
plt.xlabel(None)
plt.ylabel('United States Dollars');
#Give dollars with zeros and commas
plt.gca().get_yaxis().set_major_formatter(
ticker.FuncFormatter(lambda y, p: format(int(y), ',')))
plt.ylim(0, 9000000);
plt.savefig(pdf_folder+'grandtotals.pdf', bbox_inches='tight')
plt.savefig(png_folder+'grandtotals.png', bbox_inches='tight')
##Save the totals
totals.to_pickle('C:/Users/David Brown/Documents/PythonScripts_New/Climate Crisis/uc_insurers_2020/uc_insurance_totals.pkl')
###Output
_____no_output_____
###Markdown
UC paid Liberty Mutual over 8,000,000 dollars in 2020."Liberty Mutual is a top global insurer of coal, oil, and gas. It also invests more than \\$8.9 billion in fossil fuel companies and utilities, including $1.5 billion in thermal coal. Yet, while fueling the climate crisis, Liberty Mutual is withdrawing coverage from and jacking up the costs of insurance for longtime customers in areas at risk of climate change impacts, like wildfire-affected counties in California."https://www.ran.org/press-releases/insure-our-future-campaign-targets-liberty-mutual/
###Code
brokers = pd.read_excel(rawdata_filename, sheet_name=sheet_names[3])
#Rename the columns
brokers.rename(columns={'Unnamed: 2': 'Renewal'}, inplace=True)
brokers
brokers.Broker = brokers.Broker.str.strip()
brokers['Label'] = brokers.Broker
figures = list(round(brokers['Annual Fee']/1000000, 2))
for n, fig in enumerate(figures):
brokers.loc[n, 'Label'] = brokers.loc[n, 'Broker'] + ' (' +str(fig) +')'
brokers
#The largest broker fee goes to Marsh who also continue to support fossil fuel projects
brokers.plot.pie(x = 'Broker', y='Annual Fee', labels=brokers.Label, colors=['#00A3AD', '#DBD5CD', '#B4975A']);
plt.legend().remove();
plt.ylabel(None);
#WTW is Willis Towers Watson (insurance broker for the titanic)
plt.savefig(pdf_folder+"Spending by Broker - 2020.pdf", dpi=300, bbox_inches='tight')
plt.savefig(png_folder+"Spending by Broker - 2020.png", dpi=300, bbox_inches='tight')
###Output
_____no_output_____
|
EM_nonsparse.ipynb
|
###Markdown
functions
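Reading the `em` function below, each iteration performs the update $$z_{kj} = y_k\,\frac{A_{kj}\,x^{\text{old}}_j}{\sum_{l} A_{kl}\,x^{\text{old}}_l}, \qquad x^{\text{new}}_j = \frac{\sum_{k} z_{kj}}{\sum_{k} A_{kj}},$$ which appears to be the classic multiplicative ML-EM update for a linear model $y \approx Ax$ with a nonnegative mixing matrix $A$ (Richardson-Lucy / emission-tomography style EM). The matrix `A`, the observations `y`, the dimensions `N` and `M`, and the reference solution `x_flat` are assumed to be defined earlier in the notebook.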
###Code
def intialize(y):
    # Initialization: back-project each observation o = y[k] over the columns in
    # proportion to the row A[k], then average the contributions over the N rows.
    x_start = np.zeros(M)
    for k, o in enumerate(y):
        aver = o/sum(A[k])
        x_start += A[k]*aver
        # print(k, o, aver)
    x_start = x_start/N
    return x_start

def em(A, y, x_old):
    # One EM iteration: the E-step distributes each observation y[i] across the M
    # components in proportion to A[i]*x_old; the M-step renormalizes each column
    # sum of z by the corresponding column sum of A.
    z = np.zeros((N, M))
    for i in range(N):
        z[i] = y[i]*A[i]*x_old/sum(A[i]*x_old)
    x_new = np.zeros(M)
    for j in range(M):
        x_new[j] = sum(z[:, j])/(sum(A[:, j]))
    return x_new

def mle_em(max_iter, A, y, x_true=x_flat):
    # Run EM updates until the change between successive iterates drops below 1e-5
    # or max_iter is reached; mse is the L2 distance to the known reference x_true.
    x_old = intialize(y)
    mse = []
    for i in range(max_iter):
        x_new = em(A, y, x_old)
        mse = np.linalg.norm(x_new-x_true)
        diff = np.linalg.norm(x_new-x_old)
        if i%1 == 0:
            print(f'step: {i}, diff: {diff}, mse: {mse}')
        if diff < 1e-5:
            return x_new, diff, mse, i
        x_old = x_new
    return x_new, diff, mse, max_iter
###Output
_____no_output_____
|
.ipynb_checkpoints/Preliminary Sentiment Analysis-checkpoint.ipynb
|
###Markdown
Table of Contents1. [Data Collection](Data-Collection) - [Import Book Text and Python Libraries](Import-Book-Text-and-Python-Libraries) - [Filter out HTML from Text](Filter-out-HTML-from-Text) - [Confirm Correct Chapter Import](Confirm-Correct-Chapter-Import)2. [Sentiment Analysis](Sentiment-Analysis) - [Collect Sentiment Scores](Collect-Negative,-Positive,-Neutral,-and-Compound-Sentiment-Scores) - [Additional Data Cleaning](Filter-out-Nonsensical-Sentences) - [Plot Compound Sentiment Across All Chapters](Plot-Compound-Sentiment-Across-All-Chapters) - [Graph Analysis](Graph-Analysis) - [Plot a Single Chapter's Sentiment](Plot-a-Single-Chapter's-Sentiment) - [Author Comparisons](Author-Comparisons) - a. [Two Pseudonyms Comparison](Two-Pseudonyms-Comparison) - b. [Each Chapter Comparison](Each-Chapter-Comparison) Data Collection Import Book Text and Python Libraries
###Code
import sys
# !{sys.executable} -m pip install ebooklib
# !{sys.executable} -m pip install epub-conversion
# !{sys.executable} -m pip install bs4
# !{sys.executable} -m pip install html2text
# !{sys.executable} -m pip install nltk
# !{sys.executable} -m pip install tabulate
# !{sys.executable} -m pip install vaderSentiment
import ebooklib
from ebooklib import epub
#read ebook as epub object
book_name = "EitherOr A Fragment of Life by Kierkegaard Søren, Hannay Alastair.epub"
book = epub.read_epub(book_name)
#extract chapters text
chapters = []
print("Table of Contents: \n")
for item in book.get_items():
if item.get_type() == ebooklib.ITEM_DOCUMENT:
print(item)
chapters.append(item.get_content())
###Output
Table of Contents:
<EpubHtml:cover:Text/cover.html>
<EpubHtml:fm:Text/9780140445770_Either-Or_000.html>
<EpubHtml:chapter01:Text/9780140445770_Either-Or_001.html>
<EpubHtml:chapter02:Text/9780140445770_Either-Or_002.html>
<EpubHtml:chapter03:Text/9780140445770_Either-Or_003.html>
<EpubHtml:chapter04:Text/9780140445770_Either-Or_004.html>
<EpubHtml:chapter05:Text/9780140445770_Either-Or_005.html>
<EpubHtml:chapter06:Text/9780140445770_Either-Or_006.html>
<EpubHtml:chapter07:Text/9780140445770_Either-Or_007.html>
<EpubHtml:chapter08:Text/9780140445770_Either-Or_008.html>
<EpubHtml:chapter09:Text/9780140445770_Either-Or_009.html>
<EpubHtml:chapter10:Text/9780140445770_Either-Or_010.html>
<EpubHtml:chapter11:Text/9780140445770_Either-Or_011.html>
<EpubHtml:chapter12:Text/9780140445770_Either-Or_012.html>
<EpubHtml:chapter13:Text/9780140445770_Either-Or_013.html>
<EpubHtml:chapter14:Text/9780140445770_Either-Or_014.html>
<EpubHtml:chapter15:Text/9780140445770_Either-Or_000_Footnote.html>
<EpubHtml:cover.xhtml:Text/cover.xhtml>
###Markdown
Filter out HTML from Text Examine the table of contents of the book and determine which chapters to keep. Double check your indexes by displaying the first and last chapter. Here we will remove the first couple and last chapters, as the table of contents above shows they are not important, like the cover.
###Code
from bs4 import BeautifulSoup
#extract only text from HTML mess, also remove new lines
chapters_text = [BeautifulSoup(chap, 'html.parser').text.replace('\n',' ') for chap in chapters[2:15]]
#print chapter to doublecheck
#print(chapters_text[8])
import nltk
nltk.download('punkt')
#Tokenize sentences into a list of lists
a_list = [nltk.tokenize.sent_tokenize(chap) for chap in chapters_text]
# Get the size of list of list using list comprehension & sum() function
count = sum([ len(listElem) for listElem in a_list])
print(str(count) + " sentences")
###Output
[nltk_data] Downloading package punkt to C:\Users\Paul
[nltk_data] McCabe\AppData\Roaming\nltk_data...
[nltk_data] Unzipping tokenizers\punkt.zip.
###Markdown
Confirm Correct Chapter Import Print the first sentence of each chapter to determine the chapter name, then confirm accuracy by examining the table of contents in the Ebook. Below we can see title 8 is unusual because the first sentence is not capitalized. After checking our Ebook, we can see that it is not a new chapter and should therefore be merged with the previous chapter.
###Code
count = 0
for chapter in a_list:
print(str(count) + 'title: ' + chapter[0] + '\n')
count += 1
###Output
0title: PART ONE CONTAINING THE PAPERS OF A Are passions, then, the pagans of the soul?
1title: 1 DIAPSALMATA ad se ipsum Grandeur, savoir, renommé, Amitié, plaisir et bien, Tout n’est que vent, que fumée: Pour mieux dire, tout n’est rien.1 WHAT is a poet?
2title: 2 THE IMMEDIATE EROTIC STAGES OR THE MUSICAL EROTIC PLATITUDINOUS INTRODUCTION From the moment my soul was first overwhelmed in wonder at Mozart’s music, and bowed down to it in humble admiration, it has often been my cherished and rewarding pastime to reflect upon how that happy Greek view that calls the world a cosmos, because it manifests itself as an orderly whole, a tasteful and transparent adornment of the spirit that works upon and in it – upon how that happy view repeats itself in a higher order of things, in the world of ideals, how it may be a ruling wisdom there too, mainly to be admired for joining together those things that belong with one another: Axel with Valborg, Homer with the Trojan War, Raphael with Catholicism, Mozart with Don Juan.
3title: 3 ANCIENT TRAGEDY’S REFLECTION IN THE MODERN An Essay in the Fragmentary Endeavour Read before Symparanekromenoi1 An Essay in the Fragmentary Endeavour IF someone said the tragic will always be the tragic, I wouldn’t object too much; every historical development takes place within the embrace of its concept.
4title: 4 SHADOWGRAPHS Psychological Entertainment Read before Symparanekromenoi Love may always breach its oath; Love’s spell in this cave does lull The drunken, startled soul Into forgetting it pledged its troth.
5title: 5 THE UNHAPPIEST ONE An Enthusiastic Address to Symparanekromenoi Peroration in the Friday Meetings SOMEWHERE in England is said to be a grave distinguished not by a splendid monument or sad surroundings, but by a small inscription: ‘The Unhappiest One’.
6title: 6 CROP ROTATION An Attempt at a Theory of Social Prudence CHREMYLOS: There is too much of everything.
7title: 7 THE SEDUCER’S DIARY Sua passion’ predominante È la giovin principiànte.
8title: One could think of several ways of surprising Cordelia.
9title: PART TWO CONTAINING THE PAPERS OF B: LETTERS TO A Les grandes passions sont solitaires, et les transporter au désert, c’ est les rendre à leur empire.
10title: 2 EQUILIBRIUM BETWEEN THE AESTHETIC AND THE ETHICAL IN THE DEVELOPMENT OF PERSONALITY My friend!
11title: 3 LAST WORD PERHAPS you have the same experience with my previous letters as I have: you have forgotten most of what was in them.
12title: 4 THE EDIFYING IN THE THOUGHT THAT AGAINST GOD WE ARE ALWAYS IN THE WRONG Prayer FATHER in Heaven!
###Markdown
Combine the two chapters and remove the duplicate. Then check if anything was deleted by counting the sum of elements in the 2d array and comparing it to before. (Previous count was 9371 sentences, looks good!)
###Code
a_list[7:8] = [a_list[7] + a_list[8]]
del a_list[8]
count = 0
for chapter in a_list:
print(str(count) + 'title: ' + chapter[0] + '\n')
count += 1
count = sum([len(listElem) for listElem in a_list])
print(str(count) + " sentences")
###Output
0title: PART ONE CONTAINING THE PAPERS OF A Are passions, then, the pagans of the soul?
1title: 1 DIAPSALMATA ad se ipsum Grandeur, savoir, renommé, Amitié, plaisir et bien, Tout n’est que vent, que fumée: Pour mieux dire, tout n’est rien.1 WHAT is a poet?
2title: 2 THE IMMEDIATE EROTIC STAGES OR THE MUSICAL EROTIC PLATITUDINOUS INTRODUCTION From the moment my soul was first overwhelmed in wonder at Mozart’s music, and bowed down to it in humble admiration, it has often been my cherished and rewarding pastime to reflect upon how that happy Greek view that calls the world a cosmos, because it manifests itself as an orderly whole, a tasteful and transparent adornment of the spirit that works upon and in it – upon how that happy view repeats itself in a higher order of things, in the world of ideals, how it may be a ruling wisdom there too, mainly to be admired for joining together those things that belong with one another: Axel with Valborg, Homer with the Trojan War, Raphael with Catholicism, Mozart with Don Juan.
3title: 3 ANCIENT TRAGEDY’S REFLECTION IN THE MODERN An Essay in the Fragmentary Endeavour Read before Symparanekromenoi1 An Essay in the Fragmentary Endeavour IF someone said the tragic will always be the tragic, I wouldn’t object too much; every historical development takes place within the embrace of its concept.
4title: 4 SHADOWGRAPHS Psychological Entertainment Read before Symparanekromenoi Love may always breach its oath; Love’s spell in this cave does lull The drunken, startled soul Into forgetting it pledged its troth.
5title: 5 THE UNHAPPIEST ONE An Enthusiastic Address to Symparanekromenoi Peroration in the Friday Meetings SOMEWHERE in England is said to be a grave distinguished not by a splendid monument or sad surroundings, but by a small inscription: ‘The Unhappiest One’.
6title: 6 CROP ROTATION An Attempt at a Theory of Social Prudence CHREMYLOS: There is too much of everything.
7title: 7 THE SEDUCER’S DIARY Sua passion’ predominante È la giovin principiànte.
8title: PART TWO CONTAINING THE PAPERS OF B: LETTERS TO A Les grandes passions sont solitaires, et les transporter au désert, c’ est les rendre à leur empire.
9title: 2 EQUILIBRIUM BETWEEN THE AESTHETIC AND THE ETHICAL IN THE DEVELOPMENT OF PERSONALITY My friend!
10title: 3 LAST WORD PERHAPS you have the same experience with my previous letters as I have: you have forgotten most of what was in them.
11title: 4 THE EDIFYING IN THE THOUGHT THAT AGAINST GOD WE ARE ALWAYS IN THE WRONG Prayer FATHER in Heaven!
9371 sentences
###Markdown
Later we will change the chapter number to these chapter names we have just found. We will continue using 0 as the first chapter/index.
###Code
#examine ebook and add chapter names
chapter_names = {'0':'Preface',
'1':'Diapsalmata',
'2':'The Immediate Erotic Stages or the Musical Erotic',
                 '3':'Ancient Tragedys Reflection in the Modern',
'4':'Shadowgraphs',
'5':'Crop Rotation',
'6':'Crop Rotation',
'7':'The Seducers Diary',
'8':'The Aesthetic Validity of Marriage',
'9':'Equilibrium Between the Aesthetic and the Ethical in the Development of Personality',
'10':'Last Word',
'11':'The Edifying in the Thought that Against God We Are Always in the Wrong'}
###Output
_____no_output_____
###Markdown
Sentiment Analysis Collect Negative, Positive, Neutral, and Compound Sentiment ScoresThis analysis centers around the library VADER (Valence Aware Dictionary and sEntiment Reasoner), a well-known and highly regarded sentiment analysis tool. While normally applied to social media sentences, we will try it out on philosophy. Given a sentence, VADER will score the sentence with a fraction of how positive, negative, and neutral the sentence is. These fractions add up to 1 and can be thought of as percentages. The compound score is then calculated from the aggregate of the 3 scores, more info can be found [here](https://github.com/cjhutto/vaderSentiment) at their github page. The range of values for our compound scores is between -1 and 1, with 0 being perfectly neutral. VADER sentiment analysis is not ideal for our body of work, as it is trained on social media text, is not fine-tuned to our specific dataset, and does not factor in the context of sentences as recursive neural networks do. It is however free, easy to access, and not computationally heavy so we will use it as a preliminary analysis tool.
###Code
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
###Output
_____no_output_____
###Markdown
Let's check some of our sentences and see if we find the sentiment scores reasonable. I displayed only two but feel free to test more sentences by changing the numbers passed into the sentiment_analyzer_scores function.
###Code
#test sentiment scores with random sentences
def sentiment_analyzer_scores(sentence, print_stat = False):
score = analyzer.polarity_scores(sentence)
if(print_stat == True):
print("{:-<40} {}".format(sentence, str(score)))
return(score)
#Check the sentiment of the 4th sentence is the 1st chapter (python indexes start at 0)
sentiment_analyzer_scores(a_list[0][3], print_stat = True)
print("\n")
sentiment_analyzer_scores(a_list[3][40], print_stat = True)
print("")
###Output
Your life has perhaps brought you into touch with people of whom you suspected something of the kind, yet without being able to wrest their secret from them by force or guile. {'neg': 0.054, 'neu': 0.85, 'pos': 0.096, 'compound': 0.3612}
There is no doubt that it would be most deeply comical to have some accidental individual come by the universal idea of wanting to be the saviour of the whole world. {'neg': 0.04, 'neu': 0.895, 'pos': 0.065, 'compound': 0.2047}
###Markdown
Both sentences are categorized as largely neutral and VADER analysis struggles to find the inherent positive and negative sentiment. This seems appropriate, as these sentences are complex and difficult to discern sentiment from.
###Code
import pandas as pd
#Analyze sentiment of each sentence, put into a list
sent_list = []
count = 0
for chapter in a_list:
for x in range(0, len(chapter)):
#found in later analysis, there's some weird \xa0 included
sentence = chapter[x].rstrip('\n').replace("[…]", "")
scores = sentiment_analyzer_scores(sentence)
sent_list.append([sentence,
count,
scores['neg'],
scores['neu'],
scores['pos'],
scores['compound']])
count = count + 1
#combine list of sentence sentiments and chapter names into dataframe
sent_df = pd.DataFrame(sent_list, columns=['sentence',
'chapter',
'neg',
'neu',
'pos',
'compound'])
sent_df = sent_df.reset_index()
sent_df['chapter'] = sent_df['chapter'].astype('str')
sent_df['chapter_text'] = sent_df['chapter'].map(chapter_names)
sent_df['chapter_text'] = sent_df['chapter_text'].astype('str')
sent_df.head()
###Output
_____no_output_____
###Markdown
Filter out Nonsensical Sentences Further examination of the data reveals that some sentences are just numbers or short strings of odd characters. How much of the data is composed of these kinds of sentences?
###Code
def sent_len(sentence):
return len(sentence.split())
sentence_len_list = list(map(sent_len, sent_df['sentence']))
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1,2, sharey=True, tight_layout=True)
fig.suptitle("Histogram of Sentence Lengths", y=1.05)
axs[0].hist(sentence_len_list, range =(0,20))
axs[0].set_title(label="Range 0-20 Words")
axs[1].hist(sentence_len_list, range=(0,10), bins = 10)
plt.title("Range 0-10 Words")
###Output
_____no_output_____
###Markdown
There is no clear divide between errors, such as one-word sentences, and legitimate sentences. A brief look at the shortest sentences below will show which of them are genuine errors.
###Code
def word_count_dict(word_count):
sentence_len_list = list(map(sent_len, sent_df['sentence']))
indices = [i for i, x in enumerate(sentence_len_list) if x == word_count]
print(str(word_count) + " word sentences count: "+str(len(indices)))
dicts = {}
for index in indices:
dicts[index] = sent_df['sentence'][index]
return dicts
word_count_dict(word_count=1)
###Output
1 word sentences count: 45
###Markdown
Examining our low word count sentences, it seems only a few are actually errors. There are a few among the one-word sentences, and none among the two- or three-word sentences. Knowing the indexes, we could manually remove these sentences. However, for reproducibility's sake, we will write a short script and only apply it to the one-word count list.
###Code
def has_number(sentence):
return any(char.isdigit() for char in sentence)
#if sentence has a number, drop it from the sent_df dataframe
a_dict = word_count_dict(word_count=1)
for key in a_dict:
if(has_number(a_dict[key])==True):
sent_df = sent_df.drop(int(key))
print("After Filtering")
_ = word_count_dict(1)
#Double check that the number of sentences is the same - the number of error sentences
print(str(len(sent_df)) + " sentences")
#correct duplicate chapter numbers for later, merge 5 and 6 together
def merge_chapters(chap_left, chap_right):
sent_df.chapter = sent_df.chapter.astype("int").replace(chap_right, chap_left)
chap_numbers = sent_df.chapter.unique()
for i in range(chap_left, chap_numbers[-1], 1):
sent_df.chapter = sent_df.chapter.replace(i+1, i)
merge_chapters(5, 6)
###Output
_____no_output_____
###Markdown
Plot Compound Sentiment Across All Chapters Inspiration for the graphics and sentiment technique comes from Greg Rafferty's online article: [Sentiment Analysis on the Texts of Harry Potter](https://towardsdatascience.com/basic-nlp-on-the-texts-of-harry-potter-sentiment-analysis-1b474b13651d).
###Code
#Moving average function
def moving_average(interval, window_size):
window = np.ones(int(window_size))/float(window_size)
return np.convolve(interval, window, 'same')
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
#Calculate moving average for compound score, then graph it using lmplot
y = sent_df['compound']
sent_df['moving_avg_comp'] = moving_average(y, 150)
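# Note: np.convolve(..., 'same') zero-pads beyond the ends of the series, so the first and
# last ~75 points of this 150-sentence moving average are pulled toward zero. For comparison
# only, a pandas rolling mean averages just the points actually available near the edges:
rolling_alt = sent_df['compound'].rolling(window=150, center=True, min_periods=1).mean()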
font = {'family': 'serif',
"weight": 'normal',
'size': 27,}
sns.lmplot(y='moving_avg_comp', x='index', data=sent_df,
hue='chapter_text', fit_reg=False,
height=4, aspect=3.5, legend=False, palette='Paired',
scatter_kws={"s": 10})
#Add vertical lines
chapter_places = sent_df['chapter'].diff()[lambda x: x != 0].index.tolist()
for i in range(1,len(chapter_places)-1,1):
plt.axvline(x=chapter_places[i], color="#999")
plt.text(chapter_places[i],-.4, i+1)
#Graph formatting
plt.xlim(0)
plt.text(4000,-.05, "Neutral Sentiment")
plt.legend(bbox_to_anchor=(1.05, 1), loc=4, borderaxespad=2, markerscale=2.0)
plt.plot([1,9370], [0,0], linewidth=2, color = "gray")
plt.title("Compound Sentiment Score \nfor Kierkegaard's $\it{Either/Or}$",
loc='left', pad = 60, fontdict = font)
plt.text(0, .58, "150 Sentence Moving Average", fontdict={'size': 18,
'color': '#333'})
font = {'family': 'serif',
"weight": 'normal',
'size': 16,}
plt.xlabel("Sentences (~9000) with Chapters", fontdict = font, labelpad = 20)
plt.ylabel("VADER Sentiment Score", fontdict = font)
plt.xticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Graph Analysis First, some background on the text is necessary. Kierkegaard's *Either/Or* explores one of humanity's most pondered questions: how should we live? It is an interesting book to analyze because it is composed of a narrator and the correspondences between two opposite-thinking pseudonyms of Kierkegaard, named A and B. The Preface is a story of how these papers were found in an old desk (which is somewhat of a trope of Kierkegaard's, adding to the sense of mystery and discovery). The first part of the book, comprising seven chapters, is written by author A and titled *Either*; the last four chapters are written by B in response to A and titled *Or*. Keep in mind the numbers on the plot mark where each chapter starts; chapter 7, *The Seducer's Diary*, is written by A. Authors A and B argue for two very different types of lifestyles. A argues for a lifestyle described by Kierkegaard as the __aesthetic__: generally living life with flexible morality and pursuing new and exciting experiences with no commitments. B, on the other hand, advocates for the __ethical__ life: living with set morals and pursuing lasting happiness through long-held relationships, like marriage. Our analysis investigates this philosophical divide by examining sentiment, and we can isolate each author's work by their respective chapters. * Aesthetic chapters 2-7: A quick look at this portion of the book reveals both positive and negative sentiment, chapter 3 being largely positive and chapter 4 being very much negative. While chapter 4 is obviously the most negative chapter of the book, it is also interesting to see the sharp changes in the moving average as chapters 3 and 4 begin. Perhaps the topic of each chapter is naturally characterized as negative or positive, and our VADER sentiment analysis captures this topic rather than the tone. * Ethical chapters 8-11: This section of the book, written by author B, seems to be more positive overall and shows an interesting degree of variability, almost cyclical at times. I think the most interesting analysis, however, comes with reading the text, as you can pinpoint changes in sentiment with the context of the text. While this graph gives us a nice broad view of the book, a deeper dive into each chapter and its own statistics is necessary.
###Code
#Add Psuedonyms Column for either Narrator, Author A, or Author B
chapter_author = {'Preface':'Neutral',
'Diapsalmata':'Aesthetic',
'The Immediate Erotic Stages or the Musical Erotic':'Aesthetic',
                  'Ancient Tragedys Reflection in the Modern':'Aesthetic',
'Shadowgraphs':'Aesthetic',
'Crop Rotation':'Aesthetic',
'The Seducers Diary':'Aesthetic',
'The Aesthetic Validity of Marriage':'Ethical',
'Equilibrium Between the Aesthetic and the Ethical in the Development of Personality':'Ethical',
'Last Word':'Ethical',
'The Edifying in the Thought that Against God We Are Always in the Wrong':'Ethical'}
sent_df['chapter_author'] = sent_df['chapter_text'].map(chapter_author)
###Output
_____no_output_____
###Markdown
Plot a Single Chapter's Sentiment
###Code
def plot_chapter(chap_num):
name = "Chapter: " + str(chapter_names[str(chap_num)]) + "\n Moving Average of Compound Score"
data = sent_df[sent_df['chapter'] == chap_num]
width = len(data)
start = data.index[0]
data.plot(kind="line", x='index', y='moving_avg_comp')
plt.plot([start, start+width], [0,0], linewidth=2, color="gray")
plt.title(name)
plt.ylabel("Vader Sentiment Score")
plt.xlabel("Sentences in Chronological Order")
plt.xticks()
plt.show()
plot_chapter(chap_num = 5)
###Output
_____no_output_____
###Markdown
Here we would like to examine the text to see if there are interesting points of context. To examine interesting points of our overall book graph, change the number passed to the __plot_chapter(chap_num = ...)__ call in the last line of the box above to the chapter number you wish to examine close-up. Then change the variable *sentence_place* below to the number you would estimate from the horizontal axis. For example, I would like to find where in the text the sentiment seems to change in chapter 5. Using the graph of chapter 5 above and taking the lowest point on the graph to be around 2,830 sentences into the book, I can print the sentence at that location.
###Code
sentence_place = 2830
one_sentence = str(sent_df[sent_df.index == sentence_place].sentence)
print(one_sentence)
###Output
2830 For one blow can either deprive him of hope, s...
Name: sentence, dtype: object
###Markdown
With this sentence, I can then open the eBook, search for this sentence, and then investigate why the sentiment seems to change dramatically at this part of the chapter. Author ComparisonsCompare basic statistics between the two authors, keeping in mind the volume of sentences is different for each author. Two Pseudonyms Comparison
###Code
from tabulate import tabulate
#Choose what column you would like descriptive statistics of, pos, neg, neu, or compound
def descriptive_statistics(metric):
metric = str(metric)
sent_aes = sent_df[sent_df['chapter_author'] == 'Aesthetic']
sent_eth = sent_df[sent_df['chapter_author'] == 'Ethical']
aes_desc = sent_aes[metric].describe()
eth_desc = sent_eth[metric].describe()
df = pd.DataFrame({"aesthetic": aes_desc,
"ethical": eth_desc})
print("Descriptive statistics for " + metric + " column")
pd.options.display.float_format = '{:,.3f}'.format
display(df)
#
#print(tabulate(df, headers=["", "aesthetic", "ethical"],
# tablefmt='github', floatfmt=".4f"))
descriptive_statistics(metric = "compound")
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import iqr
import statistics as stat
import matplotlib.ticker as mtick
def basic_boxplot(metric):
metric = str(metric)
sns.boxplot(x='chapter_author', y=metric,
data=sent_df[sent_df.chapter_author != "Neutral"], palette="Paired")
plt.xlabel("chapter author", fontdict = font, labelpad = 20)
plt.ylabel(metric + " scores", fontdict = font)
plt.suptitle("Boxplot of the Two Author's " + metric + " Scores", fontdict = font)
basic_boxplot(metric='compound')
###Output
_____no_output_____
###Markdown
Each Chapter Comparison
###Code
def desc_stat_chapters(metric):
#filter out preface, get list of chapters to describe
metric = str(metric)
df_chap = sent_df[sent_df.chapter_author != "Neutral"]
chapters = df_chap['chapter_text'].unique()
#Get descriptive statistics for each chapter
dicts = {}
key_dict = {}
i = 2
for chap in chapters:
df = df_chap[df_chap['chapter_text'] == chap]
desc = df[metric].describe()
dicts['chapter ' + str(i)] = desc
key_dict['chapter ' + str(i) + ":"] = chap
i += 1
#Display the stats and chapter key
df = pd.DataFrame(dicts)
pd.options.display.float_format = '{:,.3f}'.format
print("{" + "\n".join("{!r}: {!r},".format(k, v) for k, v in key_dict.items()) + "}")
print("\n")
print("Descriptive statistics for " + metric + " column:")
#print(tabulate(df, headers=["", list(key_dict.keys())], tablefmt='github', floatfmt=".2f"))
display(df)
return(key_dict)
key_dict = desc_stat_chapters(metric = "compound")
#create shortened chapter name with number
inv_map = {v: k for k, v in key_dict.items()}
sent_df['chapter_text_num'] = sent_df['chapter_text'].map(inv_map)
%matplotlib inline
#plot each chapter's boxplot with color key for the two psuedonyms
def chapters_boxplot(metric):
#basic plot with specified metric
metric = str(metric)
plt.figure(figsize=(16, 8))
b = sns.boxplot(x='chapter_text_num', y=metric, dodge=False,
data=sent_df[sent_df.chapter_author != "Neutral"],
palette="Paired", hue='chapter_author', width=.7)
#formatting
font = {'family': 'serif',
"weight": 'normal',
'size': 27,}
b.set_xlabel("chapters in chronological order", fontdict = font, labelpad = 20)
b.set_ylabel(metric + " scores", fontdict = font)
b.legend(loc = 10, bbox_to_anchor=(.65,0.3), prop={'size': 20})
font = {'family': 'serif', "weight": 'normal', 'size': 30}
b.axes.set_title("Boxplot of the Two Author's " + metric + " Scores",
fontdict=font, pad=20)
b.tick_params(axis="x", labelsize=14)
b.tick_params(axis="y", labelsize=16)
plt.show()
chapters_boxplot(metric='compound')
#print a key to help with x-axis
print("{" + "\n".join("{!r}: {!r},".format(k, v) for k, v in key_dict.items()) + "}")
###Output
_____no_output_____
###Markdown
Graphical Analysis(Insert intelligent comments here) Save Dataframe to csv File
###Code
sent_df.to_csv("sentiment_dataframe.csv", index=False)
###Output
_____no_output_____
|
machine_learning/lesson 1 - linear regression/examples/simple_linear_regression.ipynb
|
###Markdown
Regression

Perhaps the most natural machine learning task to wrap our heads around is *regression*--a set of methods for modeling the relationship between one or more independent variables (i.e., $x$) and a dependent variable (i.e., $y$). Regression problems pop up whenever we want to output a *numeric* value. Most applications of regression fall into one of the following two broad categories:
- *inference* - to explain the relationship between the inputs and outputs (most common).
- *prediction* - to predict numeric outputs given inputs (most common in machine learning).

A few everyday examples of regression include predicting prices (of homes, stocks, etc.), predicting length of stay (for patients in the hospital), and demand forecasting (for retail sales).

Linear Regression

*Linear regression* is probably the simplest and most popular regression method. It is called "linear" regression because we **assume** that the relationship between the independent variables $x$ (called *features*) and the dependent variable $y$ (called *labels*) is linear--that is, $y$ can be expressed as a *weighted sum* of the elements in $x$, plus some *noise* in the data. In mathematical terms this can be expressed as:

$$y = wx + b$$

where $w$ represents the learnable *weights* and $b$ the *bias* (i.e., you may recognize it as the *intercept*). The weights determine the influence of each feature on the prediction, and the bias tells us what the predicted value would be if all the feature values in $x$ were 0. At a high level, the goal of linear regression is to find the weight estimates $w$ and the bias term $b$ that *minimize* the error between the predictions ($\hat{y}$, $\hat{}$ pronounced "hat") and the real labels ($y$).

Adam's Addition:
**The real labels ($y$) and predicted labels ($\hat{y}$) are continuous, so they can be any real number. Real numbers are any numbers, including those with decimals.**

Linear Model

To make the above linear regression formula more concrete, let's translate it to an example. We have acquired a dataset on world happiness. Specifically, we have the [World Happiness dataset](https://www.kaggle.com/unsdsn/world-happiness), which contains the statistics and happiness scores of approximately 150 countries around the globe. We want to construct a single-variable linear model to predict happiness scores using the GDP per capita feature column. The linear regression equation for the model can be expressed as:

$$\hat{\text{score}} = w_{\text{GDP per capita}} x_{\text{GDP per capita}} + b $$

where $w_{\text{GDP per capita}}$ is the learnable *weight*, $b$ is the *bias* (or *intercept*), and $x$ is the value of the GDP per capita for an individual sample. The goal is to choose the weight $w$ and the bias $b$ such that, on average, the predictions made according to our model *best fit* the true happiness scores observed in the data.

Linear Regression: What makes us happy?

That's the question we'll try to answer in this notebook. But how? The first step is to find a dataset related to our question. The [World Happiness dataset](https://www.kaggle.com/unsdsn/world-happiness) happens to be a great option, so we'll use it here. The dataset contains information about the state of global happiness, with happiness scores and rankings for almost every country on earth. Pretty cool, right!

We want to find out what makes us happy. We will use single-variable linear regression to answer this question. In general, when you use any data science method (like linear regression) you'll want to do a few things:
1. Explore and visualize the dataset
2. Prepare the data for the model
3. Build the model
4. Train the model
5. Evaluate the model
6. Draw conclusions

We'll start our analysis by exploring the World Happiness dataset.
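As a quick numeric illustration of the weighted-sum idea above, here is a minimal sketch; the weight and bias values are made up for illustration and are not taken from the model fitted later in this notebook.

```python
import numpy as np

# hypothetical weight and bias, chosen only to illustrate y = w*x + b
w, b = 2.2, 3.4

# GDP per capita values for a few imaginary countries
x = np.array([0.5, 1.0, 1.5])

# each predicted happiness score is the weighted feature plus the bias
y_hat = w * x + b
print(y_hat)  # [4.5 5.6 6.7]
```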
###Code
# import the libraries we be need
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Sklearn
from sklearn.linear_model import LinearRegression
from sklearn import metrics
#Adam's Contribution
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
1. Explore the dataset
###Code
# load the dataset into a dataframe
data_url = 'https://raw.githubusercontent.com/krmiddlebrook/intro_to_deep_learning/master/datasets/world-happiness/2019.csv'
happy2019 = pd.read_csv(data_url)
happy2019.head() # view the first 5 rows of the data
# how many rows and columns are in the dataset
happy2019.shape # looks like 156 rows (samples) and 9 columns (features)
###Output
_____no_output_____
###Markdown
2. Prepare the data for the model

To prepare the dataset for a linear model (or any model really), we need to complete several tasks:
1. Define the features ($x$) and labels ($y$) variables
2. Split the dataset into a *training* set and a *test* set.
3. Separate the features and the labels in the training/test sets.
###Code
# define the x and y variables
x_col = 'GDP per capita'
y_col = 'Score'
# split the dataset into a training set and a test set.
# we will use the test set in the final evaluation of our model.
train = happy2019.sample(frac=0.8, random_state=0)
test = happy2019.drop(train.index)
# separate the x (features) and y (labels) in the train/test datasets
train_features = train[x_col].values.reshape(-1, 1)
test_features = test[x_col].values.reshape(-1, 1)
train_labels = train[y_col].values.reshape(-1, 1)
test_labels = test[y_col].values.reshape(-1, 1)
# Easier Data Separation
#features = happy2019[x_col].values.reshape(-1, 1)
#labels = happy2019[y_col].values.reshape(-1, 1)
#train_features, test_features, train_labels, test_labels = train_test_split(features, labels, train_size=0.8)
print('train features shape:', train_features.shape)
print('train labels shape:', train_labels.shape)
print('test features shape:', test_features.shape)
print('test labels shape:', test_labels.shape)
print('first 5 test labels:\n', test_labels[:5])
###Output
train features shape: (125, 1)
train labels shape: (125, 1)
test features shape: (31, 1)
test labels shape: (31, 1)
first 5 test labels:
[[7.246]
[6.726]
[6.444]
[6.354]
[6.3 ]]
###Markdown
The above code returns a *training* and *test* dataset. The GDP per capita variable represents the features data and the happiness Score represents the labels. The train_features and train_labels arrays represent the features and labels of the training dataset, each containing 125 rows and 1 column. The test_features and test_labels arrays represent the features and labels of the test dataset, each containing 31 rows and 1 column.
###Code
plt.scatter(x=train_features, y=train_labels)
plt.title('Training Data')
plt.xlabel('GDP per capita')
plt.ylabel('Happiness Score')
plt.show()
###Output
_____no_output_____
###Markdown
Here is our training data. Linear regression will find the line that best passes through the middle of the training data--the line with the least training loss, which we will explain later.

3. Build the model

Now that we have prepared the data, it's time to build a model! Coding linear regression from scratch can be tedious; fortunately, the `Sklearn` library provides the `LinearRegression` class in the `sklearn.linear_model` module, which makes building linear regression models a breeze. We'll use this class to build our linear regression model to predict the happiness Score ($y$) given the GDP per capita ($x$).
###Code
# build the LinearRegression model object
model = LinearRegression(fit_intercept=True)
###Output
_____no_output_____
###Markdown
4. Train the model

Now that we have a linear model (that was easy), we need to *train* it using the training dataset. We will use the `fit` method to "fit" the model to the data (i.e., train the model).
###Code
# fit the model with the training set train_features (x) and train_labels (y) data
model.fit(train_features, train_labels)
###Output
_____no_output_____
###Markdown
Now that we've trained the model, let's see what its *mean absolute error* is on the training dataset. We use the `predict` method to make predictions given a dataset of features.
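Before calling the library function, here is a minimal hand-rolled sketch of what *mean absolute error* computes (an editorial illustration; `metrics.mean_absolute_error` in the next cell is the call actually used in this notebook).

```python
import numpy as np

def mae_by_hand(y_true, y_pred):
    # average of the absolute differences between labels and predictions
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

# e.g. mae_by_hand(train_labels, train_predictions) should match
# metrics.mean_absolute_error(train_labels, train_predictions)
```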
###Code
train_predictions = model.predict(train_features)
mae = metrics.mean_absolute_error(train_labels, train_predictions)
print('training set mean absolute error: ', round(mae, 4))
###Output
training set mean absolute error: 0.5576
###Markdown
The mean absolute error on the training set was approximately +/- 0.5576. Is this good? We leave that for you to decide.

5. Evaluate the model

Now that we've trained the model, it's time to evaluate it on unseen data using the test dataset, which we did not use while training the model. The model performance on the test set will give us a sense of how well we expect it to predict happiness scores given new GDP per capita data.
###Code
test_predictions = model.predict(test_features)
test_mae = metrics.mean_absolute_error(test_labels, test_predictions)
print('test set mean absolute error: ', round(test_mae, 4))
###Output
test set mean absolute error: 0.5152
###Markdown
The average (absolute) error is around +/- 0.5152 units for happiness Score. Is this good? We'll leave that decision up to you. Let's also visualize the prediction and real happiness Score values using the test set samples.
###Code
ax = plt.axes(aspect='equal')
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [Happiness Score]')
plt.ylabel('Predictions [Happiness Score]')
lims = [0, max(test_labels) + 1] # [0, 31]
plt.xlim(lims)
plt.ylim(lims)
# plots line y=x
# We want most points to be close to the line as possible
_ = plt.plot(lims, lims)
###Output
_____no_output_____
###Markdown
It looks like our model predicts reasonably well. Let's take a look at the error distribution.
###Code
errors = test_predictions.reshape(-1, 1) - test_labels
plt.hist(errors, bins = 10)
plt.xlabel("Prediction Error [Happiness Score]")
_ = plt.ylabel("Count")
###Output
_____no_output_____
###Markdown
The histogram shows that the errors aren't quite *Normally* distributed (also called *gaussian*), but we might expect that because the number of samples is very small. Let us now look at the best-fit line that the model found.
###Code
b = float(model.intercept_[0])
print('b:', round(b,2))
w = float(model.coef_[0][0])
print('w:', round(w,2))
plt.scatter(x=train_features, y=train_labels)
plt.title('Training Data')
plt.xlabel('GDP per capita')
plt.ylabel('Happiness Score')
y = [b] + list((w*train_features + b).reshape(-1))
x = [0] + list(train_features.reshape(-1))
plt.plot(x, y, color='r')
plt.show()
###Output
b: 3.43
w: 2.22
###Markdown
6. Draw conclusions

We built a single-variable linear regression model to predict happiness Score given a country's GDP per capita. The model achieved an average (absolute) error of about +/- 0.5152. We expect that a more complex model, or more data samples or features, could lead to better results.

Summary

In this lesson we introduced linear regression from a single-variable (one feature) perspective. We covered several important techniques for handling regression problems, including:
- The model estimation function ($y = wx + b$).
- Preparing data for a model.
- Using Sklearn to build and train a model.
- Evaluating a model.
###Code
###Output
_____no_output_____
|
db_vanilla_Multivariate_Linear_Regression_Model.ipynb
|
###Markdown
Multivariate Linear Regression Predicting House Price from Size and Number of Bedrooms
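The title mentions predicting house price from size and number of bedrooms; as a minimal sketch of that two-feature setup (with made-up numbers, since the cells below actually work on a healthcare-coverage dataset), multivariate linear regression in scikit-learn looks roughly like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# made-up training data: [size in sqft, number of bedrooms] -> price
X = np.array([[1000, 2], [1500, 3], [2000, 3], [2500, 4]])
y = np.array([200_000, 280_000, 340_000, 420_000])

model = LinearRegression().fit(X, y)
print(model.coef_)                  # one learned weight per feature
print(model.intercept_)             # learned bias term
print(model.predict([[1800, 3]]))   # price estimate for a new house
```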
###Code
# evaluate multivariate linear regression on the healthcare coverage dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
# define the location of the dataset
url = "https://storage.googleapis.com/dataprep-staging-b4c6b8ff-9afc-4b23-a0d8-d480526baaa4/yz1268%40nyu.edu/jobrun/Untitled%20recipe%20%E2%80%93%204.csv/2021-08-16_23-54-42_00000000"
# load the dataset
#the data imported from my google storage with this url link above has this format:
#healthcare_coverage,age,race, ethnicity, gender, latitude, longitude,healthcare expense.
#for example, in order to read the first column, we can do data[:,0:1]. In order to read the column from second column to the sixth column, we can do data[:,1:7],which also follows the array operation in python.
dataset = read_csv(url, header=None)
# retrieve the array of data
data = dataset.values
# separate into input and output columns
X = data[:, 1:7]
y = data[:, 0:1]
print(X)
# split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.01, random_state=1)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
# Predicting the Test set results
y_pred = regressor.predict(X_test)
# define the model
model = LinearRegression()
# fit on the training set
model.fit(X_train, y_train)
# predict on test set
yhat = model.predict(X_test)
print('Coefficients: \n', model.coef_)
print(type(regressor.coef_))
#these params are what we need and got from the trainnnig the model. Now we can use these parameters to create a formular for pricing the health insurance quota.
# evaluate predictions
#accuracy = accuracy_score(y_test, yhat)
accuracy = mean_squared_error(y_test, yhat)
#the name 'accuracy' is a little misleading; to be more specific, this value is the mean_squared_error.
print('Accuracy: %.2f' % (accuracy))
# second run: multivariate linear regression on the healthcare coverage dataset with a 20% test split
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.metrics import accuracy_score
#from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, ExtraTreesRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
# define the location of the dataset
url = "https://storage.googleapis.com/dataprep-staging-b4c6b8ff-9afc-4b23-a0d8-d480526baaa4/yz1268%40nyu.edu/jobrun/Untitled%20recipe%20%E2%80%93%204.csv/2021-08-16_23-54-42_00000000"
# load the dataset
dataset = read_csv(url, header=None)
# retrieve the array of data
data = dataset.values
# separate into input and output columns
#X = data[:, :-1].astype(str)
#y = data[:, -1].astype(str)
X = data[:, 1:7]
y = data[:, 0:1]
# split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
# ordinal encode input variables
#ordinal_encoder = OrdinalEncoder()
#ordinal_encoder.fit(X_train)
#X_train = ordinal_encoder.transform(X_train)
#X_test = ordinal_encoder.transform(X_test)
# ordinal encode target variable
#label_encoder = LabelEncoder()
#label_encoder.fit(y_train)
#y_train = label_encoder.transform(y_train)
#y_test = label_encoder.transform(y_test)
# define the model
model = LinearRegression()
# fit on the training set
model.fit(X_train, y_train)
# predict on test set
yhat = model.predict(X_test)
#yhat is defined as the predicted value y
# evaluate predictions
#accuracy = accuracy_score(y_test, yhat)
#It's because accuracy_score is for classification tasks only. For regression you should use something different, for example:
accuracy = mean_squared_error(y_test, yhat)
print('Accuracy: %.2f' % (accuracy*100))
###Output
Accuracy: 156894731241.60
|
blogs/hello_jupyter_pelican.ipynb
|
###Markdown
Blogging with Jupyter and Pelican! In this blog entry, I'll talk about setting up a blog using Pelican and authoring entries using Jupyter notebooks. First, let's try importing some code.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x=np.linspace(-1,1,100)
y=x**2
plt.plot(x,y,linewidth=2)
plt.xlabel(r'$x$',fontsize=20)
plt.ylabel(r'$y$',fontsize=20)
plt.show()
###Output
_____no_output_____
|
diabetes_pred.ipynb
|
###Markdown
DIABETES PREDICTION This model predicts if a person is diabetic or not, based on parameters like Pregnancies, blood pressure, bmi, etc. IMPORTING DATASET
###Code
import numpy as np
import pandas as pd
data = pd.read_csv("diabetes.csv")
data.head(10)
###Output
_____no_output_____
###Markdown
ANALYSING THE DATA
###Code
data.info()
data.isnull().sum()
X = data.iloc[:,:-1]
y = data.iloc[:,-1]
###Output
_____no_output_____
###Markdown
SPLITTING UP THE DATA
###Code
from sklearn.model_selection import train_test_split
# note: an integer test_size holds out that many rows (25 samples here); use 0.25 for a 25% split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 25,random_state = 0)
###Output
_____no_output_____
###Markdown
Applying classifiers AND evaluation LOGISTIC REGRESSION
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score,r2_score,classification_report
logreg = LogisticRegression(solver='lbfgs',max_iter=1000)
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
acc_logreg1 = round(accuracy_score(y_pred, y_test) , 2)*100
print("Accuracy : ",acc_logreg1)
###Output
Accuracy : 96.0
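Accuracy alone can hide which class the model gets wrong; since `classification_report` is already imported above, here is a short sketch of how it could be applied to these logistic regression predictions (added for illustration, not an original cell).

```python
from sklearn.metrics import classification_report, confusion_matrix

# per-class precision, recall and F1 for the logistic regression predictions
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```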
###Markdown
K NEIGHBOR CLASSIFIER
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
acc_knn = round(accuracy_score(y_pred,y_test), 2) * 100
print("Accuracy :" ,acc_knn)
###Output
Accuracy : 84.0
###Markdown
RANDOM FOREST
###Code
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 6, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
acc_logreg2 = round(accuracy_score(y_pred, y_test) , 2)*100
print("Accuracy : ",acc_logreg2)
###Output
Accuracy : 88.0
|
Notebook-Class-exercises/Step-4-Develop-Model-Task-6-Connect-the-Dots -Task7-Graph-Analytics.ipynb
|
###Markdown
Install Pygeohash library

This library provides functions for computing geohashes.

Step 4 - Develop Model - Task 6 - Connect the Dots & Task 7 - Graph Analytics - CLASS ASSIGNMENTS
###Code
!pip install pygeohash
###Output
Collecting pygeohash
Downloading https://files.pythonhosted.org/packages/2c/33/c912fa4476cedcd3ed9cd25c44c163583b92d319860438e6b632f7f42d0c/pygeohash-1.2.0.tar.gz
Building wheels for collected packages: pygeohash
Building wheel for pygeohash (setup.py) ... [?25l[?25hdone
Created wheel for pygeohash: filename=pygeohash-1.2.0-py2.py3-none-any.whl size=6162 sha256=3094842273f60a8a8bb2e706f2eb8583cbb5bf7e12c0cc2b25707e2b34b904ae
Stored in directory: /root/.cache/pip/wheels/3f/5f/14/989d83a271207dda28232746d63e737a2dbd88ea7f7a9db807
Successfully built pygeohash
Installing collected packages: pygeohash
Successfully installed pygeohash-1.2.0
###Markdown
Import pygeohash, networkx and Pandas libraries

- Pygeohash - functions for converting latitude/longitude to geohashes and related distance measurement utilities
- Networkx - open-source functions for creating, manipulating and querying network graphs
- Pandas - Python functions for table manipulation
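To get a feel for the two pygeohash calls used below, here is a minimal sketch (the coordinates are illustrative only):

```python
import pygeohash as pgh

# encode a latitude/longitude pair into a geohash string (longer hash = finer cell)
gh = pgh.encode(37.77, -122.42)
print(gh)        # full-precision geohash
print(gh[:2])    # first 2 characters describe a much coarser region

# approximate haversine distance in meters between two geohashes
print(pgh.geohash_haversine_distance('9q8y', '9q9p'))
```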
###Code
import pygeohash as pgh
import networkx as nx
import pandas as pd
###Output
_____no_output_____
###Markdown
Connect to datasets using Google drive or local files
###Code
using_Google_colab = True
using_Anaconda_on_Mac_or_Linux = False
using_Anaconda_on_windows = False
if using_Google_colab:
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
DM6.1 Open Notebook, read Lat, Long and compute Geohash - Activity 1
###Code
if using_Google_colab:
state_location = pd.read_csv('/content/drive/MyDrive/COVID_Project/input/state_lat_long.csv')
if using_Anaconda_on_Mac_or_Linux:
state_location = pd.read_csv('../input/state_lat_long.csv')
if using_Anaconda_on_windows:
state_location = pd.read_csv(r'..\input\state_lat_long.csv')
state_location.loc[0:5,]
###Output
_____no_output_____
###Markdown
Apply a function call to convert Lat, Long to Geohash
###Code
def lat_long_to_geohash(lat_long):
return pgh.encode(lat_long[0], lat_long[1])
state_location['geohash'] = state_location[['latitude',
'longitude']].apply(lat_long_to_geohash,
axis=1)
state_location.iloc[0:10,]
###Output
_____no_output_____
###Markdown
Truncate geohash to first two characters
###Code
state_location['geohash'] = state_location.geohash.str.slice(stop=2)
state_location.iloc[0:10,]
###Output
_____no_output_____
###Markdown
DM6.2 - Design Graph representing States and Geohash

Find neighbors by sorting the states by the 2-character geohash codes attached to each state.

Initialize Graph and create state and geohash concepts as nodes
###Code
GRAPH_ID = nx.DiGraph()
GRAPH_ID.add_node('state')
GRAPH_ID.add_node('geohash')
###Output
_____no_output_____
###Markdown
Create a node for each state
###Code
state_list = state_location.state.values
for state in state_list:
GRAPH_ID.add_node(state)
GRAPH_ID.add_edge('state', state, label='instance')
###Output
_____no_output_____
###Markdown
Create a list of unique geohash codes and create a node for each geohash
###Code
geohash_list = state_location.geohash.values
for geohash in geohash_list:
GRAPH_ID.add_node(geohash)
GRAPH_ID.add_edge('geohash', geohash, label='instance')
df_state_geohash = state_location[['state', 'geohash']]
for state_geohash in df_state_geohash.itertuples():
GRAPH_ID.add_edge(state_geohash.state, state_geohash.geohash,
label='located_at')
GRAPH_ID.add_edge(state_geohash.geohash, state_geohash.state,
label='locates',
distance=0.0)
###Output
_____no_output_____
###Markdown
DM6.3 - Which states are in Geohash 9q

Find geohash associated with California and Nevada
###Code
list(GRAPH_ID.neighbors('CA'))
list(GRAPH_ID.neighbors('NV'))
###Output
_____no_output_____
###Markdown
Find States located with geohash '9q'
###Code
list(GRAPH_ID.neighbors('9q'))
###Output
_____no_output_____
###Markdown
DM6.4 Sort the data and find neighbors sharing geohash

Find states located with geohash for all geohashes
###Code
for geohash in GRAPH_ID['geohash']:
print("Geohash: ", geohash, "States: ", list(GRAPH_ID.neighbors(geohash)))
###Output
_____no_output_____
###Markdown
Step 4 - Develop Model - Task 7 - Graph Analytics - DM7.1 Activity 1 - Find number of state and geohash nodes in a graph
###Code
len(list (GRAPH_ID.neighbors('geohash')))
len(list (GRAPH_ID.neighbors('state')))
###Output
_____no_output_____
###Markdown
DM7.2 - Find all neighboring states for NY Connect neighboring geohash codes if the distance is less than 1,000 km
###Code
for geohash_1 in geohash_list:
for geohash_2 in geohash_list:
if geohash_1 != geohash_2:
distance = pgh.geohash_haversine_distance(geohash_1, geohash_2)
if distance < 1000000:
GRAPH_ID.add_edge(geohash_1, geohash_2, label='near')
###Output
_____no_output_____
###Markdown
Find path length from NY to all nodes (states and geohashes)
###Code
neighbor_path_length = nx.single_source_dijkstra_path_length(GRAPH_ID, 'NY', weight='distance')
neighbor_path_length
###Output
_____no_output_____
###Markdown
Make a list of all nodes covered in the path length and then find those nodes which are states and less than or equal to 3 hops
###Code
neighbor_states = neighbor_path_length.keys()
state_list = (list (GRAPH_ID.neighbors('state')))
for state in state_list:
if state in neighbor_states:
if neighbor_path_length[state] <= 3:
print(state)
###Output
CT
DC
DE
IL
IN
MA
MD
ME
MI
NH
NJ
NY
OH
PA
RI
VA
VT
WI
###Markdown
DM7.3 - Find all neighboring states for each state
###Code
for state_1 in state_list:
neighbor_path_length = nx.single_source_dijkstra_path_length(GRAPH_ID, state_1)
neighbor_state_list = neighbor_path_length.keys()
next_door_list = []
for state_2 in neighbor_state_list:
if state_1 != state_2:
if state_2 in state_list:
if neighbor_path_length[state_2] <=3:
next_door_list.append(state_2)
if next_door_list:
print(state_1, next_door_list)
###Output
AL ['GA', 'MS', 'FL', 'KY', 'NC', 'SC', 'TN', 'WV']
AR ['KS', 'MO', 'OK', 'AZ', 'NM', 'UT', 'IA', 'NE', 'SD', 'LA', 'TX']
AZ ['NM', 'UT', 'AR', 'KS', 'MO', 'OK', 'CA', 'NV', 'CO', 'WY']
CA ['NV', 'AZ', 'NM', 'UT', 'ID', 'OR']
CO ['WY', 'AZ', 'NM', 'UT', 'IA', 'NE', 'SD', 'ID', 'OR', 'MT']
CT ['MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
DC ['DE', 'MD', 'VA', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'KY', 'NC', 'SC', 'TN', 'WV']
DE ['DC', 'MD', 'VA', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'KY', 'NC', 'SC', 'TN', 'WV']
FL ['AL', 'GA', 'MS']
GA ['AL', 'MS', 'FL', 'KY', 'NC', 'SC', 'TN', 'WV']
IA ['NE', 'SD', 'AR', 'KS', 'MO', 'OK', 'CO', 'WY', 'IL', 'IN', 'MI', 'OH', 'WI', 'MN', 'ND']
ID ['OR', 'CA', 'NV', 'CO', 'WY', 'WA']
IL ['IN', 'MI', 'OH', 'WI', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'IA', 'NE', 'SD', 'KY', 'NC', 'SC', 'TN', 'WV']
IN ['IL', 'MI', 'OH', 'WI', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'IA', 'NE', 'SD', 'KY', 'NC', 'SC', 'TN', 'WV']
KS ['AR', 'MO', 'OK', 'AZ', 'NM', 'UT', 'IA', 'NE', 'SD', 'LA', 'TX']
KY ['NC', 'SC', 'TN', 'WV', 'AL', 'GA', 'MS', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI']
LA ['TX', 'AR', 'KS', 'MO', 'OK']
MA ['CT', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
MD ['DC', 'DE', 'VA', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'KY', 'NC', 'SC', 'TN', 'WV']
ME ['CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT']
MI ['IL', 'IN', 'OH', 'WI', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'IA', 'NE', 'SD', 'KY', 'NC', 'SC', 'TN', 'WV']
MN ['ND', 'IA', 'NE', 'SD', 'MT']
MO ['AR', 'KS', 'OK', 'AZ', 'NM', 'UT', 'IA', 'NE', 'SD', 'LA', 'TX']
MS ['AL', 'GA', 'FL', 'KY', 'NC', 'SC', 'TN', 'WV']
MT ['CO', 'WY', 'MN', 'ND', 'WA']
NC ['KY', 'SC', 'TN', 'WV', 'AL', 'GA', 'MS', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI']
ND ['MN', 'IA', 'NE', 'SD', 'MT']
NE ['IA', 'SD', 'AR', 'KS', 'MO', 'OK', 'CO', 'WY', 'IL', 'IN', 'MI', 'OH', 'WI', 'MN', 'ND']
NH ['CT', 'MA', 'NJ', 'NY', 'PA', 'RI', 'VT', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
NJ ['CT', 'MA', 'NH', 'NY', 'PA', 'RI', 'VT', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
NM ['AZ', 'UT', 'AR', 'KS', 'MO', 'OK', 'CA', 'NV', 'CO', 'WY']
NV ['CA', 'AZ', 'NM', 'UT', 'ID', 'OR']
NY ['CT', 'MA', 'NH', 'NJ', 'PA', 'RI', 'VT', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
OH ['IL', 'IN', 'MI', 'WI', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'IA', 'NE', 'SD', 'KY', 'NC', 'SC', 'TN', 'WV']
OK ['AR', 'KS', 'MO', 'AZ', 'NM', 'UT', 'IA', 'NE', 'SD', 'LA', 'TX']
OR ['ID', 'CA', 'NV', 'CO', 'WY', 'WA']
PA ['CT', 'MA', 'NH', 'NJ', 'NY', 'RI', 'VT', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
RI ['CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'VT', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
SC ['KY', 'NC', 'TN', 'WV', 'AL', 'GA', 'MS', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI']
SD ['IA', 'NE', 'AR', 'KS', 'MO', 'OK', 'CO', 'WY', 'IL', 'IN', 'MI', 'OH', 'WI', 'MN', 'ND']
TN ['KY', 'NC', 'SC', 'WV', 'AL', 'GA', 'MS', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI']
TX ['LA', 'AR', 'KS', 'MO', 'OK']
UT ['AZ', 'NM', 'AR', 'KS', 'MO', 'OK', 'CA', 'NV', 'CO', 'WY']
VA ['DC', 'DE', 'MD', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'KY', 'NC', 'SC', 'TN', 'WV']
VT ['CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI', 'ME']
WA ['ID', 'OR', 'MT']
WI ['IL', 'IN', 'MI', 'OH', 'CT', 'MA', 'NH', 'NJ', 'NY', 'PA', 'RI', 'VT', 'IA', 'NE', 'SD', 'KY', 'NC', 'SC', 'TN', 'WV']
WV ['KY', 'NC', 'SC', 'TN', 'AL', 'GA', 'MS', 'DC', 'DE', 'MD', 'VA', 'IL', 'IN', 'MI', 'OH', 'WI']
WY ['CO', 'AZ', 'NM', 'UT', 'IA', 'NE', 'SD', 'ID', 'OR', 'MT']
###Markdown
DM7.4 - Find path between two states
###Code
nx.dijkstra_path(GRAPH_ID, 'NY', 'CA', weight='distance')
nx.dijkstra_path(GRAPH_ID, 'OR', 'CA', weight='distance')
GRAPH_ID.nodes()
nx.single_source_dijkstra_path_length(GRAPH_ID, 'NY')
###Output
_____no_output_____
|
py_files/22. HyperTune - XGB.ipynb
|
###Markdown
Hyper-tuning - RF model

The feature space, described below, and the RandomForest classifier give us the best validation AUC out of all other models. We will now use `BayesSearchCV` to hyper-tune the classifier on the feature space (in this notebook the same feature space is used to tune an `XGBClassifier`).

Engineered two different types of features:
1. n-gram similarity between each pair of questions
2. min/max/avg distance between words in a single question. Currently using the following metrics:
   * euclidean
   * cosine
   * city block or manhattan

**Pipeline**
1. Stack questions
2. Clean questions - now lower-cases all words to better lemmatize proper nouns
3. UNION
   1. n-gram similarity
   2. min/max/avg distance
4. Lemmatize questions
5. UNION
   1. n-gram similarity
   2. min/max/avg distances
6. UNION together both sets of features
7. Random Forest

**Changes**
* Fix the n_estimators to 500 and search other parameters
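`utils.calc_ngram_similarity` is not shown in this notebook; as a rough, hedged illustration of what an n-gram similarity between a question pair could look like (a Jaccard-style sketch under my own assumptions, not the project's actual implementation):

```python
def ngram_similarity(q1: str, q2: str, n: int = 2) -> float:
    """Jaccard overlap of word n-grams between two questions (illustrative only)."""
    def ngrams(text: str, n: int):
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    a, b = ngrams(q1, n), ngrams(q2, n)
    return len(a & b) / len(a | b) if (a | b) else 0.0

print(ngram_similarity("how do I learn python", "how do I learn java"))  # 0.6
```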
###Code
# data manipulation
import utils
import pandas as pd
import numpy as np
# modeling
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.preprocessing import FunctionTransformer
from sklearn.model_selection import cross_validate, StratifiedKFold, train_test_split
from xgboost import XGBClassifier
# parameter search
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
###Output
_____no_output_____
###Markdown
Load data
###Code
X_train = utils.load('X_train')
y_train = utils.load('y_train')
model_name = 'xgb_hypertune'
###Output
_____no_output_____
###Markdown
Text transformation and Feature Engineer pipes
###Code
# text transformation pipes
clean_text = Pipeline(
[
('stack', FunctionTransformer(utils.stack_questions, validate=False)),
('clean', FunctionTransformer(utils.clean_questions, validate=False))
]
)
lemma_text = Pipeline(
[
('lemma', FunctionTransformer(utils.apply_lemma, validate=False))
]
)
# feature engineering pipes
single_question_pipe = Pipeline(
[
('dist', FunctionTransformer(utils.add_min_max_avg_distance_features, validate=False)),
('unstack', FunctionTransformer(utils.unstack_questions, validate=False))
]
)
pair_question_pipe = Pipeline(
[
('ngram_sim', FunctionTransformer(utils.calc_ngram_similarity, kw_args={'n_grams':[1, 2, 3]}, validate=False))
]
)
# build features on the cleaned text only
clean_text_features = Pipeline(
[
('clean', clean_text),
('feats', FeatureUnion(
[
('pair', pair_question_pipe),
('single', single_question_pipe)
]
))
]
)
# build features on the cleanned and lemmatized text features
lemma_text_features = Pipeline(
[
('clean', clean_text),
('lemma', lemma_text),
('feats', FeatureUnion(
[
('pair', pair_question_pipe),
('single', single_question_pipe)
]
))
]
)
# pre-process pipe
feature_transformation = Pipeline(
[
('feats', FeatureUnion(
[
('clean_text_features', clean_text_features),
('lemma_text_features', lemma_text_features)
]
))
]
)
%%time
try:
X_train_transform = utils.load('X_train_transform')
except:
X_train_transform = feature_transformation.transform(X_train) ## this takes a really long time
utils.save(X_train_transform, 'X_train_transform')
###Output
CPU times: user 64 ms, sys: 40 ms, total: 104 ms
Wall time: 457 ms
###Markdown
Configure the search
###Code
skf = StratifiedKFold(n_splits=3, random_state=42)
# fixed params
xgb_params = {
# 'n_estimators': 584,
'n_jobs': 4,
'random_state': 42
}
# XGBClassifier()
# tuning parameters -- start with estimators as I know 500 gives a very good AUC
xgb_search_params = {
'n_estimators': Integer(500,2000),
'max_depth': Integer(3, 10),
'learning_rate': Real(0, 1),
'gamma': Real(0, 1),
'reg_lambda': Real(0, 1)
}
bayes_params = {
'estimator': XGBClassifier(**xgb_params),
'scoring': 'roc_auc',
'search_spaces': xgb_search_params,
'n_iter': 50,
'cv': skf,
'n_jobs': 3
}
search_cv = BayesSearchCV(**bayes_params)
###Output
_____no_output_____
###Markdown
Callbacks
###Code
def print_score_progress(optim_results):
''' Prints the best score, current score, and current iteration
'''
current_results = pd.DataFrame(search_cv.cv_results_)
print(f'Iteration: {current_results.shape[0]}')
print(f'Current AUC: {current_results.tail(1).mean_test_score.values[0]:.6f}')
print(f'Best AUC: {search_cv.best_score_:.6f}')
print()
def save_best_estimator(optim_results):
''' Saves best estimator
'''
current_results = pd.DataFrame(search_cv.cv_results_)
best_score = search_cv.best_score_
current_score = current_results.tail(1).mean_test_score.values[0]
if current_score == best_score:
model = f'tuned_models/{model_name}_{best_score:.6f}'
print(f'Saving: {model}')
print()
utils.save(search_cv, model)
%%time
search_cv_results = search_cv.fit(X_train_transform, y_train, callback=[print_score_progress, save_best_estimator])
pd.DataFrame(search_cv_results.cv_results_).sort_values('mean_test_score', ascending=False)
search_cv_results.best_estimator_.get_params() #AUC .868429
###Output
_____no_output_____
###Markdown
Best n-estimators: **AUC .868429**{'bootstrap': True, 'class_weight': None, 'criterion': 'gini', 'max_depth': None, 'max_features': 'auto', 'max_leaf_nodes': None, 'min_impurity_decrease': 0.0, 'min_impurity_split': None, 'min_samples_leaf': 1, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 584, 'n_jobs': 4, 'oob_score': False, 'random_state': 42, 'verbose': 0, 'warm_start': False}
###Code
from sklearn.model_selection import train_test_split
X_t, X_v, y_t, y_v = train_test_split(X_train_transform, y_train, stratify=y_train, random_state=42, test_size = 0.33)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=500, n_jobs=4, random_state=42, verbose=1)
rf.fit(X_t, y_t)
y_v_probs = rf.predict_proba(X_v)[:, 1]
from sklearn import metrics
metrics.roc_auc_score(y_v, y_v_probs)
###Output
_____no_output_____
###Markdown
XGB built-in tuning
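Because the cell below monitors AUC on a validation set, a common companion to this built-in evaluation is early stopping. A hedged sketch follows; the exact keyword location depends on the XGBoost version (older releases accept `early_stopping_rounds` in `fit`, newer ones take it in the constructor), and `X_t`, `y_t`, `X_v`, `y_v` are the splits defined in the cell below.

```python
from xgboost import XGBClassifier

# sketch only: stop adding trees once validation AUC has not improved for 50 rounds
es_model = XGBClassifier(n_estimators=5000, random_state=42, n_jobs=-1)
es_model.fit(
    X_t, y_t,
    eval_metric="auc",
    eval_set=[(X_t, y_t), (X_v, y_v)],
    early_stopping_rounds=50,   # assumes an XGBoost version where fit() accepts this
    verbose=False,
)
# es_model.best_iteration then suggests how many trees to keep
```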
###Code
model = XGBClassifier(n_estimators=5000, random_state=42, n_jobs=-1)
X_t, X_v, y_t, y_v = train_test_split(X_train_transform, y_train, stratify=y_train, random_state=42, test_size = 0.33)
eval_set = [(X_t, y_t), (X_v, y_v)]
eval_metric = ["auc"]
%time model.fit(X_t, y_t, eval_metric=eval_metric, eval_set=eval_set, verbose=True)
import matplotlib.pyplot as plt
utils.save(model, 'xgb_tuned_model')
model = utils.load('xgb_tuned_model')
model.evals_result_.keys()
plt.plot(model.evals_result_['validation_0']['auc'][1000:], label='train')
plt.plot(model.evals_result_['validation_1']['auc'][1000:], label='validation');
###Output
_____no_output_____
|
spark_foundation_data_science_task.ipynb
|
###Markdown
**The Spark Foundation** **Name-Aniket Vats** **Domain-Data Science and Business Analytics**

**Linear Regression with Python Scikit Learn**

In this section we will see how the Python Scikit-Learn library for machine learning can be used to implement regression functions. We will start with simple linear regression involving two variables.

**Simple Linear Regression**

In this regression task we will predict the percentage of marks that a student is expected to score based upon the number of hours they studied. This is a simple linear regression task as it involves just two variables.
###Code
#importing all libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#getting dataset
df=pd.read_excel("/content/score.xlsx")
df.head()
print(f"following data has:{df.shape[0]} rows and {df.shape[1]} columns")
#checking for null values
df.isna().sum()
#visualizing the dataset
df.plot(x="Hours",y="Scores",style='o')
plt.title("Hours vs Scores")
plt.xlabel("Hours")
plt.ylabel("Scores")
plt.grid()
plt.show()
#defining Independent and Dependent variable
X=df[['Hours']]
y=df['Scores']
print(f"Independent variable:{X}")
print("---------------------------------------")
print(f"Dependent Variable:{y}")
#splitting train and test data
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
print("\nX_train :\n", X_train[:5])
print("-------------------------")
print("\nX_test :\n", X_test[:5])
#now importing linear regression model and building it
from sklearn.linear_model import LinearRegression
lr=LinearRegression()
lr.fit(X_train,y_train)
print("model building complete")
preds=lr.predict(X_test)
pd.DataFrame({"Actual values":y_test,"Predicted values":preds})
plt.scatter(X_test,y_test, s = 70, label='Actual')
plt.scatter(X_test,preds, s = 90, marker = '^', label='Predicted')
plt.xlabel('Hours')
plt.ylabel('Scores')
plt.legend();
plt.grid();
plt.show();
sns.regplot(X,y,data=df)
plt.show()
#Evaluating the model
from sklearn.metrics import r2_score
print(f"accuracy of model:{r2_score(y_test,preds)}")
#predicting score of student who studied 9.25hr
hour=[[9.25]]
score=lr.predict(hour)
print(f"no. of hours:{9.25} and predicted score:{score}")
###Output
no. of hours:9.25 and predicted score:[93.69173249]
###Markdown
**Evaluating the model**

The final step is to evaluate the performance of the algorithm. This step is particularly important to compare how well different algorithms perform on a particular dataset. For simplicity here, we have chosen the mean absolute error. There are many such metrics.
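The cell below reports the mean absolute error; for reference, here is a small sketch of a few other common regression metrics from `sklearn.metrics`, reusing the notebook's `y_test` and `preds` (added for illustration):

```python
from sklearn import metrics
import numpy as np

mae = metrics.mean_absolute_error(y_test, preds)
mse = metrics.mean_squared_error(y_test, preds)
rmse = np.sqrt(mse)                    # root mean squared error
r2 = metrics.r2_score(y_test, preds)   # proportion of variance explained
print(mae, mse, rmse, r2)
```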
###Code
#calculating MAE (mean absolute error)
from sklearn import metrics
print(f"mean_absolute_error:{metrics.mean_absolute_error(y_test,preds)}")
###Output
mean_absolute_error:4.183859899002982
###Markdown
**Conclusion** I was successfully able to carry out the Prediction using Supervised ML task and was able to evaluate the model's performance on various parameters. **Thank You**
###Code
###Output
_____no_output_____
|
notebooks/numpy_exercises.ipynb
|
###Markdown
Numpy exercises

Refer back to the Intro to `numpy` lesson from yesterday for reminders!

***

1) Using `numpy`, create an array that starts with the number `1`, ends with the number `1000`, counting by twos.

2a) Using `numpy`, create an array of 1,000,000 linearly-spaced values between zero and one.
2b) Now do the same task using a `list` in pure Python code (no `numpy`).
2c) Use the `%timeit` magic function to test which method is faster.

3) Using `numpy`, create a 1000-element array of normally distributed values ($\mu=0\,,\sigma=1$), and calculate the mean of that array.

4) Using the array you created in 3, replace the minimum value with `0`.

5) Use `numpy`'s indexing features to get (a) the negative values of the array from 4, (b) the non-negative values of the array from 4.

6) Use the lines below to generate a quiescent light curve with gaussian noise (flux vs. time measurements common in astronomy). Then, use `numpy`'s indexing features to do a "3$\sigma$ clip" (remove any values greater than three standard deviations from the mean flux).
###Code
import numpy as np
light_curve = 1 + np.random.randn(10000)
###Output
_____no_output_____
|
Data-Analysis-And-Visualization/Data Analysis and Visualization.ipynb
|
###Markdown
The describe() method generates descriptive statistics that summarize the central tendency, dispersion and shape of a dataset's distribution, excluding NaN values. By default this method deals only with numeric columns; to summarize categorical variables as well, use include="all".

- count - number of non-empty rows in the column
- mean - mean of the column
- std - standard deviation of the column
- min - minimum value of the column
- 25% - 25th percentile, i.e. the median of the lower half of the values
- 50% - median of the values
- 75% - 75th percentile, i.e. the median of the upper half of the values
- max - maximum value of the column
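Since the note above mentions `include="all"`, here is a minimal sketch of the two calls side by side (illustrative; `heart` is the DataFrame used throughout this notebook):

```python
heart.describe()                 # numeric columns only (default)
heart.describe(include="all")    # also summarizes categoricals with unique/top/freq rows
```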
###Code
heart.describe()
continuous_columns = ['creatinine_phosphokinase','ejection_fraction',
'platelets','serum_creatinine','serum_sodium','time']
# Check for null values. Dataset does not has any null values.
heart.isnull().sum()
p=heart[continuous_columns].hist(figsize = (20,20))
###Output
_____no_output_____
###Markdown
We can see skewness in a few columns. More about skewness can be read here: https://www.statisticshowto.com/probability-and-statistics/skewed-distribution/
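A quick way to quantify the skewness seen in the histograms is the pandas `skew()` method; a small sketch using the notebook's `heart` DataFrame and `continuous_columns` (added for illustration):

```python
# positive values indicate a right-skewed (long right tail) distribution
print(heart[continuous_columns].skew())
```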
###Code
# Finding outliers
# Using sns boxplot
plt.figure(figsize=(15,5))
sns.boxplot(x='variable', y='value', data=pd.melt(heart[continuous_columns]))
fig, ax = plt.subplots(nrows = 2, ncols = 3)
fig.tight_layout()
plt.figure(figsize=(20,10))
i=0
for row in range(0,2):
for col in range(0,3):
ax[row,col].boxplot(heart[continuous_columns[i]])
ax[row,col].set_title(continuous_columns[i])
i = i+1
plt.show()
heart['log_creatinine_phosphokinase'] = np.log(heart['creatinine_phosphokinase'])
sns.boxplot(data=heart['log_creatinine_phosphokinase'])
heart['log_creatinine_phosphokinase'].hist()
# Check for outliers
# Outliers are the values which are 3 standard deviations away from mean
z = np.abs(stats.zscore(heart[continuous_columns]))
z.shape
heart.shape
heart.DEATH_EVENT.value_counts(normalize=True)
x_ticks=np.arange(0,9000,100)
plt.figure(figsize=(20, 10))
plt.xticks(x_ticks, rotation=90)
plt.scatter('creatinine_phosphokinase','age', data=heart)
#'serum_creatinine','platelets'
plt.hist( 'creatinine_phosphokinase', data=heart)
heart[heart.creatinine_phosphokinase==7861]
heart.DEATH_EVENT.value_counts()
heart[heart.creatinine_phosphokinase>=600].shape
# Normal range of CPK is between 20-200. CPK is an enzyme that catalyzes the reaction of
# creatine and adenosine triphosphate (ATP) to phosphocreatine and adenosine diphosphate (ADP).
# It is mainly found in cardiac and skeletal muscles. If these muscles are damaged, the enzyme leaks into
# the blood stream. Thus, elevated CPK is an indication of muscle damage.
heart[(heart.creatinine_phosphokinase>=200)&(heart.DEATH_EVENT==1)].shape
plt.scatter( 'age', 'creatinine_phosphokinase', data=heart, color='darkblue',s=1)
plt.scatter('age', 'serum_creatinine', data=heart, color='red', s=4)
###Output
_____no_output_____
###Markdown
Let's analyse serum cratinine Source - https://pubmed.ncbi.nlm.nih.gov/9056611/Elevated serum creatinine has been associated with increased mortality in hypertensive persons, the elderly, and patients with myocardial infarction or stroke in whom cardiovascular disease is the major cause of death. We have examined the relationship between serum creatinine concentration and the risk of major ischemic heart disease and stroke events and all-cause mortality in a general population of middle-aged men.
###Code
plt.hist('serum_creatinine', data=heart)
x_ticks=np.arange(0,10,0.5)
plt.xticks(x_ticks)
plt.scatter('serum_creatinine','DEATH_EVENT',data=heart)
heart[(heart.DEATH_EVENT==1) & (heart.serum_creatinine >=1.3) & (heart.high_blood_pressure==1)].shape
###Output
_____no_output_____
|
Accruals_script.ipynb
|
###Markdown
First, we have to perform a lot of wrangling on the reports generated by Workday
###Code
import pandas as pd
import openpyxl as oxl
from subprocess import call
from os import remove, chdir, path, getcwd
from io import BytesIO
from sys import argv
from zipfile import ZipFile, ZIP_DEFLATED
import lxml
# TODO: move this to .settings to add clarity for future editor
WD_report_name = "EXP031-RPT-Process-Accruals_with_Expense_Report.xlsx"
WD2_report_name = "EXP032-RPT-Process-Accruals-_No_Expense.xlsx"
wrangled_WD_report_name = "EXP031_plus_EXP032_wrangled.csv"
wrangled_WD2_report_name = "EXP032_wrangled.csv"
MF_JE_template_name = "MF_JE_template.xlsx"
master_file_name = "WD_Accruals_Master.xlsx"
input_folder_path = "../Script/Input/"
generic_GL_account = 46540000
def generate_vbs_script():
# this VBS script converts XLSX files to CSV format for faster processing
vbscript = """if WScript.Arguments.Count < 3 Then
WScript.Echo "Please specify the source and the destination files. Usage: ExcelToCsv <xls/xlsx source file> <csv destination file> <worksheet number (starts at 1)>"
Wscript.Quit
End If
csv_format = 6
Set objFSO = CreateObject("Scripting.FileSystemObject")
src_file = objFSO.GetAbsolutePathName(Wscript.Arguments.Item(0))
dest_file = objFSO.GetAbsolutePathName(WScript.Arguments.Item(1))
worksheet_number = CInt(WScript.Arguments.Item(2))
Dim oExcel
Set oExcel = CreateObject("Excel.Application")
Dim oBook
Set oBook = oExcel.Workbooks.Open(src_file)
oBook.Worksheets(worksheet_number).Activate
oBook.SaveAs dest_file, csv_format
oBook.Close False
oExcel.Quit
"""
try:
with open("ExcelToCsv.vbs", "wb") as f:
f.write(vbscript.encode("utf-8"))
except Exception as e:
        print(e)
print("VBS script for converting xlsx files to csv could not be generated.")
def load_csv(xlsx_file_path, has_sheets=False, skiprows=None, usecols=None):
# this function maps the generate_vbs_script() function to the input XLSX file
if has_sheets:
# sheet numbers to use; using the first three and I don't know how to retrieve no of sheets, hence the fixed numbers
sheets = map(str, range(1, 3))
sheet_dataframes = []
for sheet in sheets:
csv_file_path = "../Script/{}{}{}".format(input_folder_path, sheet, ".csv")
call(["cscript.exe", "../Script/ExcelToCsv.vbs", xlsx_file_path, csv_file_path, sheet, r"//B"])
try:
sheet_dataframe = pd.read_csv(csv_file_path, encoding="latin-1", engine="c", usecols=usecols)
except Exception as e:
                print(e)
print("Sheets could not be converted to CSV format.")
sheet_dataframes.append(sheet_dataframe)
return tuple(sheet_dataframes)
else:
csv_file_path = "{}{}".format(xlsx_file_path[:-4], "csv")
# //B is for batch mode; this is to avoid spam on the console :)
call(["cscript.exe", "../Script/ExcelToCsv.vbs", xlsx_file_path, csv_file_path, str(1), r"//B"])
if skiprows:
try:
data = pd.read_csv(csv_file_path, skiprows=skiprows, encoding="latin-1", engine="c", usecols=usecols)
except Exception as e:
                print(e)
print("Something went wrong... make sure report names weren't changed or debug the load_csv function")
else:
try:
data = pd.read_csv(csv_file_path, encoding="latin-1", engine="c", usecols=usecols)
except Exception as e:
                print(e)
print("Something went wrong... make sure report names weren't changed or debug the load_csv function")
return data
def load_all():
file_names = [WD_report_name, WD2_report_name, MF_JE_template_name, master_file_name]
dataframes = []
WD1_required_cols = ["Entity Code", "Cost Center", "Expense Report Number", "Expense Item", "Net Amount LC"]
WD2_required_cols = ["Transaction ID", "Billing Amount", "Currency", "Report Cost Location"]
# the script will be used by load_csv() to convert XLSX to CSV for faster processing
generate_vbs_script()
for file_name in file_names:
if file_name == WD_report_name:
usecols = WD1_required_cols
skiprows = [0]
elif file_name == WD2_report_name:
usecols = WD2_required_cols
elif file_name == MF_JE_template_name:
with open("{}{}".format(input_folder_path, MF_JE_template_name), "rb") as f:
in_mem_file = BytesIO(f.read())
MF_JE_template = oxl.load_workbook(in_mem_file)
dataframes.append(MF_JE_template)
continue
else:
# this will produce two CSVs from the two first two sheets of WD_Accruals_Master.xlsm
cc_to_ba, accounts = load_csv("{}{}".format(input_folder_path, file_name), has_sheets=True)
dataframes.extend([cc_to_ba, accounts])
break
df = load_csv("{}{}".format(input_folder_path, file_name), skiprows=skiprows, usecols=usecols)
dataframes.append(df)
# resetting params
usecols = skiprows = None
return dataframes
def collect_garbage():
# remove no longer needed files
WD_report_byproduct = "{}{}".format(WD_report_name[:-5], ".csv")
WD2_report_byproduct = "{}{}".format(WD2_report_name[:-5], ".csv")
excel_to_csv_macro_byproducts = ["1.csv", "2.csv", WD_report_byproduct, WD2_report_byproduct]
for byproduct in excel_to_csv_macro_byproducts:
remove("{}{}".format(input_folder_path, byproduct))
remove("ExcelToCsv.vbs")
def initial_cleanup():
# TODO: deal with scientific-notation-like business areas converting to sci-notation
global WD_report, WD2_report
collect_garbage()
# remove rows with total amount 0 or less / unfortunately, pandas nor Python are able to convert amounts in the format:
# 123,456.00 to float, hence need to either use localization (bad idea as the process is WW), or use below workaround
try:
WD_report["Net Amount LC"] = WD_report["Net Amount LC"].apply(lambda x: x.replace(",", "") if type(x) != float else x)
except:
pass
WD_report["Net Amount LC"] = WD_report["Net Amount LC"].map(float)
WD_report = WD_report[WD_report["Net Amount LC"] > 0]
try:
WD2_report["Billing Amount"] = WD2_report["Billing Amount"].apply(lambda x: x.replace(",", "") if type(x) != float else x)
except:
pass
# for card expenses, negative amounts are put in parentheses, e.g. (100.00); below line removes lines with such amounts
WD2_report = WD2_report[WD2_report["Billing Amount"].apply(lambda x: "(" not in x)]
WD2_report["Billing Amount"] = WD2_report["Billing Amount"].map(float)
# filer out lines with missing cost center/cost location, as this data is critical to generating an accrual
WD_report.dropna(subset=["Cost Center"], inplace=True)
WD2_report.dropna(subset=["Report Cost Location"], inplace=True)
# delete the duplicate cost centers/descriptions inside Cost Center/Cost Location column
WD_report["Cost Center"] = WD_report["Cost Center"].astype("str").map(lambda x: x.split()[0])
WD2_report["Report Cost Location"] = WD2_report["Report Cost Location"].astype("str").map(lambda x: x.split()[0])
# add "Company code" column as it will be used by generate_output() to generate a separate folder for each company code
WD2_report["Company code"] = WD2_report["Report Cost Location"].apply(lambda x: x[:4])
WD_report = WD_report[WD_report["Expense Report Number"].apply(lambda x: "Cancelled" not in x)]
def vlookup(report, what, left_on, right_on):
merged = report.merge(what, left_on=left_on, right_on=right_on, how="left")
return merged
def run_vlookups():
global WD_report, WD2_report
accounts = pd.DataFrame(accounts_file["Account"]).astype(int)
master_data_to_join = master_data_file[["Business Area", "Profit Center", "MRU", "Functional Area"]]
WD_report = vlookup(WD_report, accounts, left_on=[WD_report["Expense Item"], WD_report["Entity Code"]], right_on=[accounts_file["Expense Item name"], accounts_file["Subsidiary"]])
# the account number is provided separately for each country. However, all countries have the same account for a given category, so we need to remove these duplicate rows.
# in case any country has a separate account for a given category in the future, the script will still work
WD_report = vlookup(WD_report, master_data_to_join, WD_report["Cost Center"], master_data_file["Cost Center"])
WD2_report = vlookup(WD2_report, master_data_to_join, WD2_report["Report Cost Location"], master_data_file["Cost Center"])
def final_cleanup():
global WD_report
global accounts_file
travel_journal_item_account = 46540000
company_celebration_account = 46900000
german_debit_account = 46920000
# add vlookup exceptions
no_of_items = WD_report.shape[0]
for row_index in range(no_of_items):
category = str(WD_report["Expense Item"].iloc[row_index]) # for some reason this column is loaded as float, hence the str()
if "Travel Journal Item" in category:
WD_report.loc[row_index, "Account"] = travel_journal_item_account
# WD_report.set_value(index, "Acc#", travel_journal_item_account)
if "Company Celebration" in category:
# WD_report.set_value(index, "Acc#", company_celebration_account)
WD_report.loc[row_index, "Account"] = company_celebration_account
# controllership requirement: change all 9999 BA's to 1019, 1059 to 1015
WD_report["Business Area"] = WD_report["Business Area"].apply(lambda x: "1019" if str(x) == "9999" else x)
WD_report["Business Area"] = WD_report["Business Area"].apply(lambda x: "1015" if str(x) == "1059" else x)
# this is to stop Excel from reading e.g. 2E00 as a number in scientific notation
WD_report["Business Area"] = WD_report["Business Area"].map(str)
# note that this also overrides the above two exceptions, which are changed to the german account
WD_report.loc[WD_report["Entity Code"] == "DESA", "Account"] = german_debit_account
# ensure that account number is provided, and that it is an integer
try:
WD_report["Account"] = WD_report["Account"].map(int)
except:
# this means that some account numbers were not found for a company code-category combination -> use an account
# for the same category, but another company code (all CCs should use the same account)
lines_with_missing_account = WD_report[WD_report["Account"].isnull()]
# remove above lines from WD_report
WD_report = WD_report[~WD_report["Account"].isnull()]
# remove duplicate categories, effectively leaving the first found acc # for a given category, which is what is going to be assigned for missing values
deduplicated_accounts_file = accounts_file.drop_duplicates(subset=["Expense Item name"])
accounts = pd.DataFrame(deduplicated_accounts_file["Account"])
# dropping Account column so that merge does not produce useless new columns
#deduplicated_accounts_file.drop("Account", axis=1, inplace=True)
#accounts.rename(columns={"Account": "Acc#"}, inplace=True)
lines_with_missing_account.drop("Account", axis=1, inplace=True)
merged = lines_with_missing_account.merge(accounts, left_on=lines_with_missing_account["Expense Item"],
right_on=deduplicated_accounts_file["Expense Item name"], how="left")
WD_report = WD_report.append(merged)
WD_report["Account"] = WD_report["Account"].map(int)
# add a checksum so we can group by BA + PC combinations
WD_report["Checksum"] = WD_report["Profit Center"].astype(str) + WD_report["Business Area"]
WD2_report["Checksum"] = WD2_report["Profit Center"].astype(str) + WD2_report["Business Area"]
# restore column order after df.append()
final_column_order = ["Entity Code", "Cost Center", "Expense Report Number", "Expense Item", "Net Amount LC", "Account", "Business Area", "Profit Center", "Functional Area", "MRU", "Checksum"]
WD_report = WD_report.reindex(columns=final_column_order)
# add the generic account number to all card expenses
WD2_report["Account"] = generic_GL_account
# drop currency column, rename WD2's columns to match WD's, and append it
WD1_to_WD2_columns_mapping = {"Transaction ID": "Expense Report Number", "Billing Amount": "Net Amount LC",
"Report Cost Location": "Cost Center", "Company code": "Entity Code"}
WD2_report.drop("Currency", axis=1, inplace=True)
WD2_report.rename(columns=WD1_to_WD2_columns_mapping, inplace=True)
WD_report = WD_report.append(WD2_report)
# rename the column with card transaction numbers back to a senslible name
WD2_report.rename(columns={"Expense Report Number": "Transaction number"}, inplace=True)
###Output
_____no_output_____
###Markdown
Add the cell below to the final code; ensure getcwd() returns the correct path
###Code
# chdir(path.dirname(argv[0]))
# getcwd()
WD_report, WD2_report, MF_JE_template, master_data_file, accounts_file = load_all()
initial_cleanup()
run_vlookups()
final_cleanup()
wrangled_WD_report_save_path = "../Script/Output/wrangled_reports/" + wrangled_WD_report_name
wrangled_WD2_report_save_path = "../Script/Output/wrangled_reports/" + wrangled_WD2_report_name
WD_report.to_csv(wrangled_WD_report_save_path, index=False)
WD2_report.to_csv(wrangled_WD2_report_save_path, index=False)
###Output
_____no_output_____
###Markdown
Now let's use these wrangled files to generate CSVs in the format accepted by Netsuite
###Code
from urllib.request import urlopen
from json import loads
wrangled_WD_report = pd.read_csv("../Script/Output/wrangled_reports/{}".format(wrangled_WD_report_name))
WD_report_groupby_input = wrangled_WD_report[["Entity Code", "Checksum", "Account", "Expense Report Number", "Net Amount LC", "MRU", "Functional Area"]]
grouped_by_cc = WD_report_groupby_input.groupby("Entity Code", as_index=False)
JE_csv_columns = ["ACCOUNT", "DEBIT", "CREDIT", "TAX CODE", "LINE MEMO", "MRU", "BUSINESS AREA", "PROFIT CENTER", "FUNCTIONAL AREA",
"DATE", "POSTING PERIOD", "ACCOUNTING BOOK", "SUBSIDIARY", "CURRENCY", "MEMO", "REVERSAL DATE", "TO SUBSIDIARY",
"TRADING PARTNER", "TRADING PARTNER CODE", "UNIQUE ID"]
last_day_of_previous_month = pd.to_datetime("today") - pd.tseries.offsets.MonthEnd(1)
date_cut = last_day_of_previous_month.strftime("%m.%y")
first_day_of_current_month = pd.to_datetime("today").replace(day=1).strftime("%m/%d/%Y")
AP_account = 25702400 # the account from which the money will flow
def generate_exchange_rates():
# https://openexchangerates.org
exchange_rates_api_key = "11f20df062814531be891cc0173702a6"
api_call = f"https://openexchangerates.org/api/latest.json?app_id={exchange_rates_api_key}"
rates_api_response = urlopen(api_call)
rates_api_response_str = rates_api_response.read().decode("ascii")
rates_api_response_dict = loads(rates_api_response_str)
rates = rates_api_response_dict["rates"]
# feel free to update company codes/currencies
currencies_in_scope = {"AUSA": "AUD", "BESA": "EUR", "BGSA": "BGN", "BRSA": "BRL", "CASA": "CAD", "CHSD": "CHF", "CNSA": "CNY",
"CRSB": "CRC", "CZSA": "CZK", "DESA": "EUR", "DKSA": "DKK", "ESSA": "EUR", "FRSA": "EUR", "GBF0": "USD",
"GBSA": "GBP", "IESA": "EUR", "IESB": "EUR", "ILSA": "ILS", "ILSB": "ILS", "INSA": "INR", "INSB": "INR",
"INSD": "INR", "ITSA": "EUR", "JPSA": "JPY", "LUSB": "EUR", "MXSC": "MXN", "NLSC": "EUR", "PHSB": "PHP",
"PLSA": "PLN", "PRSA": "PYG", "ROSA": "RON", "RUSA": "RUB", "SESA": "SEK", "TRSA": "TRY", "USMS": "USD",
"USSM": "USD", "USSN": "USD"}
exchange_rates_to_usd = {}
for company_code in currencies_in_scope:
currency = currencies_in_scope[company_code]
# the rates from API are from USD to x; we need from x to USD
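# e.g. if rates["EUR"] were 0.92 (meaning 1 USD = 0.92 EUR), we would store
# 1/0.92 ≈ 1.087, the USD value of 1 EUR (illustrative numbers only)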
try:
exchange_rate_to_usd = 1/rates[currency]
except:
continue
if company_code in exchange_rates_to_usd:
continue
else:
exchange_rates_to_usd[company_code] = exchange_rate_to_usd
return exchange_rates_to_usd
def generate_output(cc): # cc = Company Code
CSV_file_path = "../Script/Output/upload_to_Netsuite/{}_Accrual_WD_{}.csv".format(cc, date_cut)
MF_JE_template_save_path = "../Script/Output/upload_to_Sharepoint/{}_{}{}".format(cc, MF_JE_template_name[:-5], ".xlsx")
JE_csv = pd.DataFrame(columns=JE_csv_columns)
cur_cc_data = grouped_by_cc.get_group(cc)
grouped_by_checksum = cur_cc_data.groupby(["Checksum"])
posting_month = last_day_of_previous_month.strftime("%b")
posting_year = last_day_of_previous_month.strftime("%Y")
posting_period = "{} {}".format(posting_month, posting_year)
# this is a way to track row number, so that groups can be input to consecutive rows
cur_group_start_row = 0
for checksum, g in grouped_by_checksum:
business_area = checksum[-4:] # BA is the last 4 chars of checksum
profit_center = checksum[:5] # PC is the first 5 chars of checksum
general_description = "WD {} ACCRUALS {} FY{}".format(cc, posting_month, posting_year)
for i in range(cur_group_start_row, cur_group_start_row + len(g)):
# for each line for a given checksum (BA and PC combination), retrieve its Acc# column value and input it
# into the next free cell in the "ACCOUNT" column in the JE csv form
JE_csv.loc[i, "ACCOUNT"] = g.iloc[i - cur_group_start_row]["Account"]
JE_csv.loc[i, "DEBIT"] = g.iloc[i - cur_group_start_row]["Net Amount LC"]
JE_csv.loc[i, "LINE MEMO"] = g.iloc[i - cur_group_start_row]["Expense Report Number"] + " Accrual"
# Note that even though the template has a TRANSACTION DATE - DAY field, it still passes the whole date in mm/dd/YYYY format
JE_csv.loc[i, "DATE"] = last_day_of_previous_month.strftime("%m/%d/%Y")
JE_csv.loc[i, "POSTING PERIOD"] = posting_period
JE_csv.loc[i, "SUBSIDIARY"] = cc
JE_csv.loc[i, "MEMO"] = general_description
JE_csv.loc[i, "REVERSAL DATE"] = first_day_of_current_month
JE_csv.loc[i, "MRU"] = g.iloc[i - cur_group_start_row]["MRU"]
JE_csv.loc[i, "FUNCTIONAL AREA"] = g.iloc[i - cur_group_start_row]["Functional Area"]
# here we're filling out the AP account row
last_group_start_row = cur_group_start_row
cur_group_start_row += len(g)
JE_csv.loc[cur_group_start_row, "ACCOUNT"] = AP_account
JE_csv.loc[cur_group_start_row, "CREDIT"] = JE_csv.loc[last_group_start_row:cur_group_start_row, "DEBIT"].sum()
JE_csv.loc[cur_group_start_row, "LINE MEMO"] = general_description
JE_csv.loc[cur_group_start_row, "BUSINESS AREA"] = business_area
JE_csv.loc[cur_group_start_row, "PROFIT CENTER"] = profit_center
JE_csv.loc[cur_group_start_row, "DATE"] = last_day_of_previous_month.strftime("%m/%d/%Y")
JE_csv.loc[cur_group_start_row, "POSTING PERIOD"] = posting_period
JE_csv.loc[cur_group_start_row, "SUBSIDIARY"] = cc
JE_csv.loc[cur_group_start_row, "MEMO"] = general_description
JE_csv.loc[cur_group_start_row, "REVERSAL DATE"] = first_day_of_current_month
cur_group_start_row += 1
JE_amount_local = JE_csv["CREDIT"].sum(skipna=True)
exchange_rates = generate_exchange_rates()
amount_in_usd = JE_amount_local * exchange_rates[cc]
to_generate = []
# company requirement
if amount_in_usd > 5000:
to_generate.append(cc)
if cc in to_generate:
JE_csv.to_csv(CSV_file_path, index=False)
print("{} CSV file generated :)".format(cc))
# TODO: since wb.save() closes the workbook, we would need to reopen it on each loop... hence doing it
# with the deeper openpyxl.writer.excel.ExcelWriter's write_data() method
archive = ZipFile(MF_JE_template_save_path,'w', ZIP_DEFLATED, allowZip64=True)
writer = oxl.writer.excel.ExcelWriter(MF_JE_template, archive)
#writer._comments = [] TODO: do this for each sheet
writer.write_data()
#MF_JE_template.save(MF_JE_template_save_path)
print("{} template file generated :)".format(cc))
MF_JE_template.close()
for key, group in grouped_by_cc:
company_code = key
generate_output(company_code)
###Output
AUSA CSV file generated :)
AUSA template file generated :)
DESA CSV file generated :)
|
ERT/fault/Faultzone.ipynb
|
###Markdown
Modelling and inversion of a fault zoneThis example was taken from a book chapter (Tanner et al. 2020) illustrating how ERT works using a hypothetical fault structure.Tanner, D.C., Buness, H., Igel, J., Günther, T., Gabriel, G., Skiba, P., Plenefisch, T., Gestermann, N. & Walter, T. (2020): Fault Detection. in: Tanner, C.D. & Brandes, C. (Eds.): Understanding Faults, 380p., Elsevier, p. 81-146, doi:10.1016/B978-0-12-815985-9.00003-5.
###Code
import pygimli as pg
import pygimli.meshtools as mt
from pygimli.physics import ert
left = mt.createWorld(start=[-300, -150], end=[0, 0], layers=[-60, -80])
for b in left.boundaries():
if b.center().x() == 0:
b.setMarker(0)
pg.viewer.showMesh(left, markers=True, showBoundary=True);
right = mt.createWorld(start=[0, -150], end=[300, 0], layers=[-30, -50])
for b in right.boundaries():
if b.center().x() == 0:
b.setMarker(0)
pg.viewer.showMesh(right, markers=True, showBoundary=True);
scheme = ert.createData(range(-200, 201, 25), 'dd')
print(scheme)
world = mt.mergePLC([left, right])
for pos in scheme.sensorPositions():
world.createNode(pos, marker=-99)
world.createNode(pos+pg.Pos(0, -3))
pg.show(world);
mesh = mt.createMesh(world, quality=34.6)
print(mesh)
ax, _ = pg.show(mesh, markers=True, showMesh=True)
ax.plot(pg.x(scheme), pg.y(scheme), 'mo');
rhomap = [[1, 1000], [2, 10], [3, 100]]
data = ert.simulate(mesh, scheme, rhomap, noiseLevel=0.03, noiseAbs=10e-6)
pg.show(data);
MAT = ert.simulate(mesh, scheme, rhomap, calcOnly=True, returnFields=True)
pot = MAT[1] - MAT[0]
ax, cb = pg.show(mesh, rhomap, logScale=True,
cMap='Spectral_r', label='resistivity')
ax.set_ylim(-150, 40)
pg.viewer.mpl.drawStreams(ax, mesh, pot)
# pg.mplviewer.drawMeshBoundaries(ax, world)
ax.set_xlabel('x [m]')
ax.set_ylabel('z [m]')
# draw electrode positions
ax.text(-190, 15, 'I', ha='center', va='bottom')
for xi in pg.x(data.sensorPositions()):
ax.plot([xi, xi], [0, 10], 'k-')
if xi < 200 and xi > -160:
ax.text(xi+10, 15, 'U', ha='center', va='bottom')
# ax.figure.savefig('fault-rescurrent.pdf', bbox_inches='tight')
res = pg.solver.parseMapToCellArray(rhomap, mesh)
# pg.show(mesh, res, showMesh=True); # would do the same
mm = pg.Mesh(mesh)
for c in mm.cells():
c.setMarker(1)
# bigMesh = pg.meshtools.appendTriangleBoundary(mesh, marker=0)
# mgr = ert.ERTManager(data)
mgr = ert.ERTManager()
mgr.setData(data)
mgr.setMesh(mm)
# mgr.mesh = mm
# mgr.fop.region(0).setBackground(True)
# resBig = pg.solver.parseMapToCellArray(rhomap, mesh)
mgr.fop.createJacobian(res)
j0 = mgr.fop.jacobian()[0]
print(mgr.paraDomain, len(j0))
j1 = mgr.fop.jacobian()[-1] * res / data('rhoa')[data.size()-1]
cmax = 0.03
aS, _ = pg.show(mesh, j1, cMap='bwr', cMin=-cmax, cMax=cmax,
xlabel='x [m]', ylabel='z [m]')#, label='sensitivity')
# aS.figure.savefig('fault-sens.pdf', bbox_inches='tight')
mgr = ert.ERTManager()
mgr.setData(data)
mgr.invert(zWeight=0.1, paraDX=0.1)
aR, _ = mgr.showResult(cMin=10, cMax=1000, cMap='Spectral_r')
pg.viewer.mpl.drawMeshBoundaries(aR, world, hideMesh=True, fitView=False,
linewidth=0.1, xlabel='x [m]', ylabel='z [m]')
# aR.figure.savefig('fault-res.pdf', bbox_inches='tight')
###Output
_____no_output_____
|
Labs/Lab2/Romil/seeds_svm.ipynb
|
###Markdown
Seeds Dataset with Support Vector Machines[EDA for this dataset](https://github.com/romilsiddhapura/ml_practices_2018/blob/master/Labs/Lab1/Romil/Notebooks/Seeds_EDA.ipynb) [Data for the same](https://github.com/romilsiddhapura/ml_practices_2018/blob/master/Labs/Lab1/Romil/Data/seeds_dataset.csv)
###Code
#importing necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.preprocessing import StandardScaler
from yellowbrick.features import ParallelCoordinates
from matplotlib.colors import ListedColormap
from sklearn.model_selection import learning_curve,validation_curve
from sklearn.model_selection import ShuffleSplit
from yellowbrick.classifier import ClassificationReport, ConfusionMatrix
#reading csv file
names = ['Area','Perimeter','Compactness','length_kernel','width_kernel','asy_coefficient','len_kernel_groove','target']
data = pd.read_csv('../../Lab1/Romil/Data/seeds_dataset.csv',header=None,names=names)
data.head()
#Selecting all features
X = data.drop(['target'], axis=1).values
y = data.target.values
#Selecting Area and asymmetric coefficient as main features
X = data.drop(['target','Perimeter','Compactness','length_kernel','width_kernel','len_kernel_groove'],axis=1).values
y = data.target.values
#Splitting dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Scaling train and testing datasets
###Code
sc = StandardScaler()
sc.fit(X_train)
# Scaling the train and test sets.
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
#Made a model object having linear kernel
model = svm.SVC(kernel = 'linear',C = 100, gamma = 1)
#Fitting the model with standardized data
model.fit(X_train_std,y_train)
###Output
_____no_output_____
###Markdown
checking the model's parameters
###Code
model.coef_
model.intercept_
###Output
_____no_output_____
###Markdown
Checking the accuracy
###Code
model.score(X_test_std,y_test)
###Output
_____no_output_____
###Markdown
Visualization
###Code
# get the separating hyperplane
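# Geometry note: a linear decision boundary satisfies w[0]*x + w[1]*y + b = 0,
# so y = -(w[0]/w[1])*x - b/w[1]; the slope a and the -intercept/w[1] term below
# come from that rearrangement. With three classes, SVC fits one-vs-one boundaries,
# which is why coef_ and intercept_ contain one row/entry per pairwise classifier.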
w = model.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-4,4)
yy = a * xx - (model.intercept_[0]) / w[1]
w1 = model.coef_[1]
a1 = -w1[0] / w1[1]
xx1 = np.linspace(-4,4)
yy1 = a1 * xx1 - (model.intercept_[1]) / w1[1]
w2 = model.coef_[2]
a2 = -w2[0] / w2[1]
xx2 = np.linspace(-4,4)
yy2 = a2 * xx2 - (model.intercept_[2]) / w2[1]
# plot the parallels to the separating hyperplane that pass through the
# support vectors (margin away from hyperplane in direction
# perpendicular to hyperplane). This is sqrt(1+a^2) away vertically in
# 2-d.
margin = 1 / np.sqrt(np.sum(model.coef_ ** 2))
yy_down = yy - np.sqrt(1 + a ** 2) * margin
yy_up = yy + np.sqrt(1 + a ** 2) * margin
yy_down1 = yy1 - np.sqrt(1 + a1 ** 2) * margin
yy_up1 = yy1 + np.sqrt(1 + a1 ** 2) * margin
yy_down2 = yy2 - np.sqrt(1 + a2 ** 2) * margin
yy_up2 = yy2 + np.sqrt(1 + a2 ** 2) * margin
plt.figure()
plt.clf()
plt.plot(xx, yy, 'r-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
plt.plot(xx1, yy1, 'r-')
plt.plot(xx1, yy_down1, 'k--')
plt.plot(xx1, yy_up1, 'k--')
#plt.plot(xx2, yy2, 'r-')
#plt.plot(xx2, yy_down2, 'k--')
#plt.plot(xx2, yy_up2, 'k--')
plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=80, zorder=1, edgecolors='k',color = 'r')
plt.scatter(X_train_std[:, 0], X_train_std[:, 1], c=y_train, zorder=10, cmap=plt.cm.Paired, edgecolors='k')
plt.axis('tight')
###Output
_____no_output_____
###Markdown
Just checking out prediction and original label
###Code
model.predict(X_test_std)
y_test
#Assigning y_pred to the prediction of test dataset
y_pred = model.predict(X_test_std)
print(metrics.accuracy_score(y_test,y_pred,normalize=False), 'correctly labelled out of',X_test_std.shape[0])
###Output
58 correctly labelled out of 63
###Markdown
Plotting the learning curve
###Code
# Defining the function to plot the learning curve:
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=None, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
# Plotting the learning curve for the linear-kernel SVM
title = "Learning Curves (SVM, linear kernel)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = svm.SVC(kernel='linear',gamma = 1)
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)
###Output
_____no_output_____
###Markdown
Observation``` As we can see from the learning curve, both the training and cross-validation scores end up around 0.91, which is good.``` Plotting the confusion matrix
###Code
#Plotting the confusion matrix to understand the true positives and negatives and the false positives and negatives
cm = ConfusionMatrix(model, classes=[1,2,3])
cm.score(X_test_std, y_test)
cm.poof()
###Output
_____no_output_____
###Markdown
**Larger numbers on the diagonal mean more true positives**
###Code
# Generating the classification report containing measures of precision, recall and F1-score
visualizer = ClassificationReport(model, support=True)
visualizer.fit(X_train_std, y_train)
visualizer.score(X_test_std, y_test)
visualizer.poof()
###Output
_____no_output_____
|
Examples/Data Access Examples.ipynb
|
###Markdown
Data Access ExamplesThe data modules provide a way to access the underlying data and transform it to facilitate analysis. This includes:- Data retrieval using SQL- Creation of mutation matrices- Creation of vectors of dependent variables
###Code
%matplotlib inline
import init
from microbepy.common import constants as cn
from microbepy.common import isolate
from microbepy.common.range_constraint import RangeConstraint
from microbepy.common.study_context import nextStudyContext
from microbepy.common.study_context import StudyContext
from microbepy.common import util
from microbepy.data.model_data_provider import ModelDataProvider, ModelDataDualProvider
import copy
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Data ModelMicrobepy assumes that data are organized in terms of:- Isolate: This is the microbial community (a generalization of the usual definition of isolate)- Mutation: Changes to the genome- Culture: Phenotype information obtained from a cultureAn isolate is described in terms of the following:- Evolutionary line (often, just line)- Transfer time (time at which the isolate was obtained)- End point dilution (abbreviated EPD) or None if not applicable- Clone (an index of same genome organisms on a plate) or None if not applicable- Species or None if multiple species- Experiment (just CI in these data)For example, the isolate HA3.152.10.01.D.CI has evolutionary line HA3, transfer 152, EPD 10, clone 1, and species DVH.A mutation is specified by its affected gene (if applicable), position in the genome, nucleotides in the reference genome, and the changed nucleotides. For example, DVU2451.2555217.CA.C is a mutation in the DVU gene DVU2451 at position 2555217 that changes the nucleotides CA to C.The culture is a string that uniquely identifies each single or paired incubation of microbes.These data are combined into a single table called ``genotype_phenotype``. The keys are the isolate (key_isolate), mutation (key_mutation), and culture (key_culture). The table ``genotype`` only contains information related to isolates and mutations. Details of the columns in these tables can be found in microbepy.common.constants.py.Mutations may be at different ``granularities`` as indicated by the names of columns that contain mutation values.- KEY_MUTATION - identifies a single change to the genome- GGENE_ID - mutations to the same gene are aggregated in that the gene is mutated if at least one mutation is present for a nucleotide in the genome; intergenic mutations are identified by position- GENE_ID - only considers genes- COG, EC - higher level classifications of genesThe database file is specified in the ``.microbepy`` directory (in the user's home directory) in the file ``config.py``.
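As a quick illustration of the key conventions above, here is a small, self-contained sketch (not part of microbepy; the helper name is made up) that splits an isolate key into its fields:

```python
def parse_isolate_key(key):
    """Split an isolate key such as 'HA3.152.10.01.D.CI' into its fields."""
    line, transfer, epd, clone, species, experiment = key.split(".")
    return {"line": line, "transfer": int(transfer), "epd": epd,
            "clone": clone, "species": species, "experiment": experiment}

print(parse_isolate_key("HA3.152.10.01.D.CI"))
# {'line': 'HA3', 'transfer': 152, 'epd': '10', 'clone': '01', 'species': 'D', 'experiment': 'CI'}
```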
###Code
sql_cmd = "select key_isolate, key_mutation, key_culture from genotype_phenotype where transfer = 152"
df = util.readSQL(sql_cmd)
df.head()
###Output
_____no_output_____
###Markdown
Note that many of the results are for end point dilutions since the clone and species are "\*" (None). To obtain true isolates, the query can be modified.
###Code
sql_cmd = """
select key_isolate, key_mutation, key_culture from genotype_phenotype
where transfer = 152 and species in ('D', 'M')
"""
df = util.readSQL(sql_cmd)
df.head()
###Output
_____no_output_____
###Markdown
To obtain phenotype information, we add columns for ``rate`` and ``yield``.
###Code
sql_cmd = """
select key_isolate, key_mutation, key_culture, rate, yield from genotype_phenotype
where transfer = 152 and species in ('D', 'M')
"""
df = util.readSQL(sql_cmd)
df.head()
###Output
_____no_output_____
###Markdown
Mutation MatrixThe mutation matrix is a dataframe representation of a matrix.The columns are values of mutations at a specified granularity (e.g., KEY_MUTATION, GGENE_ID).Values are either 1 (mutation is present) or 0 (mutation is absent).The row index is the community in which the mutation was found. For paired-isolate cultures, a tuple is used.The dependent variables growth rate and yield (the phenotypes) are provided as a dataframe as well, with the same row index structure as the associated mutation matrix.The class ``ModelDataProvider`` is used to construct the mutation matrix and the dependent variable(s).In computational studies, we often iterate through a range of values for the dependent variable, choice of mutation column, and evolutionary line. Rather than writing similar loops repeatedly, we have created a Python class called ``StudyContext`` that specifies the values to use.
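The toy sketch below (made-up values, not drawn from the database) illustrates this layout: a 0/1 dataframe indexed by paired-isolate communities, and a dependent-variable dataframe sharing the same index:

```python
import pandas as pd

# toy mutation matrix: rows are paired-isolate communities, columns are mutations
idx = pd.MultiIndex.from_tuples(
    [("HA2.152.01.01.D.CI", "HA2.152.01.01.M.CI"),
     ("HA2.152.01.02.D.CI", "HA2.152.01.02.M.CI")],
    names=["key_isolate_dvh", "key_isolate_mmp"])
toy_df_X = pd.DataFrame([[1, 0], [0, 1]], index=idx, columns=["DVU2451", "MMP1314"])
toy_df_y = pd.DataFrame({"rate": [0.5, -0.5]}, index=idx)  # standardized phenotype
print(toy_df_X)
print(toy_df_y)
```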
###Code
# Mutation matrix for
provider = ModelDataProvider(StudyContext(depvar=cn.RATE, mutation_column=cn.KEY_MUTATION))
provider.do() # Initialize the matrices
print(provider.df_X.head())
provider.df_y.head()
provider = ModelDataProvider(StudyContext(depvar=cn.YIELD, mutation_column=cn.GGENE_ID))
provider.do() # Initialize the matrices
print(provider.df_X.head())
provider.df_y.head()
###Output
DVH__.1007324 DVH__.1010502 \
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 0.0 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 0.0 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 0.0 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 0.0 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 0.0 0.0
DVH__.1259554 DVH__.1297045 \
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 0.0 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 0.0 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 0.0 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 0.0 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 0.0 0.0
DVH__.1297047 DVH__.1313341 \
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 0.0 1.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 0.0 1.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 0.0 1.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 0.0 1.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 0.0 1.0
DVH__.1555094 DVH__.184033 \
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 0.0 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 0.0 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 0.0 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 0.0 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 0.0 0.0
DVH__.2053158 DVH__.2060393 \
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 0.0 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 0.0 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 0.0 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 0.0 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 0.0 0.0
... MMP1314 MMP1361 \
key_isolate_dvh key_isolate_mmp ...
HA2.152.01.01.D.CI HA2.152.01.01.M.CI ... 0.0 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI ... 0.0 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI ... 0.0 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI ... 0.0 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI ... 0.0 0.0
MMP1362 MMP1511 MMP1591 MMP1612 \
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 1.0 1.0 0.0 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 1.0 1.0 0.0 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 1.0 1.0 0.0 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 1.0 1.0 0.0 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 1.0 1.0 0.0 0.0
MMP1718 MMP__.1080889 MMP__.1439720 \
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 1.0 0.0 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 1.0 0.0 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 1.0 0.0 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 1.0 0.0 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 1.0 1.0 0.0
MMP__.308883
key_isolate_dvh key_isolate_mmp
HA2.152.01.01.D.CI HA2.152.01.01.M.CI 0.0
HA2.152.01.02.D.CI HA2.152.01.02.M.CI 0.0
HA2.152.01.03.D.CI HA2.152.01.03.M.CI 0.0
HA2.152.05.01.D.CI HA2.152.05.01.M.CI 0.0
HA2.152.05.03.D.CI HA2.152.05.03.M.CI 0.0
[5 rows x 112 columns]
###Markdown
Note that values of dependent variables are standardized. So a value of 0 is the mean of the observations.
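A quick way to check this (assuming `provider` from the previous cell is still in scope):

```python
import numpy as np

# the standardized dependent variable should be centered near zero
print("mean of df_y:", np.nanmean(provider.df_y.values))
```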
###Code
# Extracting the communities from mutation matrix
print('\n'.join(["%s, %s" % (d, m) for d, m in provider.df_X.index.tolist()]))
###Output
HA2.152.01.01.D.CI, HA2.152.01.01.M.CI
HA2.152.01.02.D.CI, HA2.152.01.02.M.CI
HA2.152.01.03.D.CI, HA2.152.01.03.M.CI
HA2.152.05.01.D.CI, HA2.152.05.01.M.CI
HA2.152.05.03.D.CI, HA2.152.05.03.M.CI
HA2.152.08.01.D.CI, HA2.152.08.01.M.CI
HA2.152.08.03.D.CI, HA2.152.08.03.M.CI
HA2.152.09.01.D.CI, HA2.152.09.01.M.CI
HA2.152.09.02.D.CI, HA2.152.09.02.M.CI
HA2.152.09.03.D.CI, HA2.152.09.03.M.CI
HR2.152.01.01.D.CI, HR2.152.01.01.M.CI
HR2.152.01.02.D.CI, HR2.152.01.02.M.CI
HR2.152.01.03.D.CI, HR2.152.01.03.M.CI
HR2.152.05.01.D.CI, HR2.152.05.01.M.CI
HR2.152.05.02.D.CI, HR2.152.05.02.M.CI
HR2.152.05.03.D.CI, HR2.152.05.03.M.CI
HR2.152.10.01.D.CI, HR2.152.10.01.M.CI
HR2.152.10.02.D.CI, HR2.152.10.02.M.CI
HR2.152.10.03.D.CI, HR2.152.10.03.M.CI
UE3.152.02.01.D.CI, UE3.152.02.01.M.CI
UE3.152.02.02.D.CI, UE3.152.02.02.M.CI
UE3.152.02.03.D.CI, UE3.152.02.03.M.CI
UE3.152.03.01.D.CI, UE3.152.03.01.M.CI
UE3.152.03.02.D.CI, UE3.152.03.02.M.CI
UE3.152.03.03.D.CI, UE3.152.03.03.M.CI
UE3.152.09.01.D.CI, UE3.152.09.01.M.CI
UE3.152.09.02.D.CI, UE3.152.09.02.M.CI
UE3.152.09.03.D.CI, UE3.152.09.03.M.CI
UE3.152.10.01.D.CI, UE3.152.10.01.M.CI
UE3.152.10.02.D.CI, UE3.152.10.02.M.CI
UE3.152.10.03.D.CI, UE3.152.10.03.M.CI
|
Deep Learning Notebooks/1. Neural Networks and Deep Learning/2+Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
|
###Markdown
Logistic Regression with a Neural Network mindsetWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.**Instructions:**- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.**You will learn to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
###Code
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
###Output
_____no_output_____
###Markdown
2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.Let's get more familiar with the dataset. Load the data by running the following code.
###Code
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
###Output
_____no_output_____
###Markdown
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
###Code
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
###Output
y = [1], it's a 'cat' picture.
###Markdown
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **Exercise:** Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image)Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
###Code
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
###Output
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
###Markdown
**Expected Output for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use: ```pythonX_flatten = X.reshape(X.shape[0], -1).T  # X.T is the transpose of X```
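As a quick sanity check of this trick on a small dummy array (illustrative only):

```python
import numpy as np

X_dummy = np.arange(2 * 3 * 3 * 3).reshape(2, 3, 3, 3)     # shape (a, b, c, d)
X_dummy_flatten = X_dummy.reshape(X_dummy.shape[0], -1).T  # shape (b*c*d, a)
print(X_dummy.shape, "->", X_dummy_flatten.shape)          # (2, 3, 3, 3) -> (27, 2)
```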
###Code
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0],-1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
###Output
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
###Markdown
**Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset.
###Code
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
###Output
_____no_output_____
###Markdown
**What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
###Output
sigmoid([0, 2]) = [ 0.5 0.88079708]
###Markdown
**Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
###Code
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim,1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
###Output
w = [[ 0.]
[ 0.]]
b = 0
###Markdown
**Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.**Hints**:Forward Propagation:- You get X- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m)})$- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
###Code
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T,X) +b) # compute activation
cost = ((-1/m)*(np.sum((Y*np.log(A) + (1-Y)*np.log(1-A))))) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1/m)*np.dot(X,(A-Y).T)
db = (1/m)*np.sum(A-Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
###Output
dw = [[ 0.99993216]
[ 1.99980262]]
db = 0.499935230625
cost = 6.00006477319
###Markdown
**Expected Output**: ** dw ** [[ 0.99993216] [ 1.99980262]] ** db ** 0.499935230625 ** cost ** 6.000064773192205 d) Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
###Code
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate*dw
b = b - learning_rate*db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
###Output
w = [[ 0.1124579 ]
[ 0.23106775]]
b = 1.55930492484
dw = [[ 0.90158428]
[ 1.76250842]]
db = 0.430462071679
###Markdown
**Expected Output**: **w** [[ 0.1124579 ] [ 0.23106775]] **b** 1.55930492484 **dw** [[ 0.90158428] [ 1.76250842]] **db** 0.430462071679 **Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
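One possible vectorized form of step 2, sketched on dummy activations (the graded cell below uses `np.around`, which yields the same labels for sigmoid outputs):

```python
import numpy as np

A_dummy = np.array([[0.2, 0.8, 0.49, 0.51]])
Y_prediction_dummy = (A_dummy > 0.5).astype(float)  # 1 where activation > 0.5, else 0
print(Y_prediction_dummy)                           # [[0. 1. 0. 1.]]
```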
###Code
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
### w = w.reshape(X.shape)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T,X) + b)
### END CODE HERE ###
Y_prediction = np.around(A)
### for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
### pass
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
print ("predictions = " + str(predict(w, b, X)))
###Output
predictions = [[ 1. 1.]]
###Markdown
**Expected Output**: **predictions** [[ 1. 1.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. Use the following notation: - Y_prediction for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize()
###Code
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = np.zeros((X_train.shape[0],1)), 0
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w,b,X_test)
Y_prediction_train = predict(w,b,X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
###Output
_____no_output_____
###Markdown
Run the following cell to train your model.
###Code
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
###Output
Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
###Markdown
**Expected Output**: **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
###Code
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
###Output
/opt/conda/lib/python3.5/site-packages/ipykernel/__main__.py:4: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
###Markdown
Let's also plot the cost function and the gradients.
###Code
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
###Output
_____no_output_____
###Markdown
**Interpretation**:You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**:In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
###Code
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
###Output
learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %
-------------------------------------------------------
learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %
-------------------------------------------------------
learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %
-------------------------------------------------------
###Markdown
**Interpretation**: - Different learning rates give different costs and thus different predictions results.- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.- In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
###Code
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "photo_latest14.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
###Output
y = 0.0, your algorithm predicts a "non-cat" picture.
|
sequential_tracing/PostAnalysis/.ipynb_checkpoints/Part1_chr21_DomainAnalysis-checkpoint.ipynb
|
###Markdown
This is a Jupyter notebook guide on domain analysis by Pu Zheng and Bogdan Bintu, 2020.06.06 Import packages
###Code
# imports
import sys, os, glob, time, copy
import numpy as np
import scipy
import pickle
sys.path.append(os.path.abspath(r"..\."))
import source as ia
from scipy.signal import find_peaks
from scipy.spatial.distance import cdist,pdist,squareform
print(os.getpid())
###Output
_____no_output_____
###Markdown
Import plotting
###Code
# Required plotting setting
import matplotlib
matplotlib.rcParams['pdf.fonttype'] = 42
import matplotlib.pyplot as plt
plt.rc('font', family='serif')
plt.rc('font', serif='Arial')
_font_size = 7.5
# Required plotting parameters
from source.figure_tools import _dpi,_single_col_width,_double_col_width,_single_row_height,_ref_bar_length, _ticklabel_size,_ticklabel_width,_font_size
# figure folder
parent_figure_folder = r'\\10.245.74.158\Chromatin_NAS_4\Chromatin_Share\final_figures'
figure_folder = os.path.join(parent_figure_folder, 'Chr21_domain_figures')
print(figure_folder)
if not os.path.exists(figure_folder):
os.makedirs(figure_folder)
print("generating this folder")
###Output
_____no_output_____
###Markdown
0. Load data
###Code
data_folder = r'E:\Users\puzheng\Dropbox\2020 Chromatin Imaging Manuscript\Revision\DataForReviewers'
data_folder = r'C:\Users\Bogdan\Dropbox\2020 Chromatin Imaging Manuscript\Revision\DataForReviewers'
rep1_filename = os.path.join(data_folder, 'chromosome21.tsv')
rep2_filename = os.path.join(data_folder, 'chromosome21-cell_cycle.tsv')
###Output
_____no_output_____
###Markdown
0.1 load chr21 (replicate 1 - without cell cycle)
###Code
# load from file and extract info
import csv
rep1_info_dict = {}
with open(rep1_filename, 'r') as _handle:
_reader = csv.reader(_handle, delimiter='\t', quotechar='|')
_headers = next(_reader)
print(_headers)
# create keys for each header
for _h in _headers:
rep1_info_dict[_h] = []
# loop through content
for _contents in _reader:
for _h, _info in zip(_headers,_contents):
rep1_info_dict[_h].append(_info)
from tqdm import tqdm_notebook as tqdm
# clean up info
data_rep1 = {'params':{}}
# clean up genomic coordinates
region_names = np.unique(rep1_info_dict['Genomic coordinate'])
region_starts = np.array([int(_n.split(':')[1].split('-')[0]) for _n in region_names])
region_ends = np.array([int(_n.split(':')[1].split('-')[1]) for _n in region_names])[np.argsort(region_starts)]
region_starts = np.sort(region_starts)
mid_positions = ((region_starts + region_ends)/2).astype(np.int)
mid_positions_Mb = np.round(mid_positions / 1e6, 2)
# clean up chrom copy number
chr_nums = np.array([int(_info) for _info in rep1_info_dict['Chromosome copy number']])
chr_ids, region_cts = np.unique(chr_nums, return_counts=True)
dna_zxys_list = [[[] for _start in region_starts] for _id in chr_ids]
# clean up zxy
for _z,_x,_y,_reg_info, _cid in tqdm(zip(rep1_info_dict['Z(nm)'],rep1_info_dict['X(nm)'],\
rep1_info_dict['Y(nm)'],rep1_info_dict['Genomic coordinate'],\
rep1_info_dict['Chromosome copy number'])):
# get chromosome inds
_cid = int(_cid)
_cind = np.where(chr_ids == _cid)[0][0]
# get region indices
_start = int(_reg_info.split(':')[1].split('-')[0])
_rind = np.where(region_starts==_start)[0][0]
dna_zxys_list[_cind][_rind] = np.array([float(_z),float(_x), float(_y)])
# merge together
dna_zxys_list = np.array(dna_zxys_list)
data_rep1['chrom_ids'] = chr_ids
data_rep1['mid_position_Mb'] = mid_positions_Mb
data_rep1['dna_zxys'] = dna_zxys_list
# clean up tss and transcription
if 'Gene names' in rep1_info_dict:
import re
# first extract number of genes
gene_names = []
for _gene_info, _trans_info, _tss_coord in zip(rep1_info_dict['Gene names'],
rep1_info_dict['Transcription'],
rep1_info_dict['TSS ZXY(nm)']):
if _gene_info != '':
# split by semicolon
_genes = _gene_info.split(';')[:-1]
for _gene in _genes:
if _gene not in gene_names:
gene_names.append(_gene)
print(f"{len(gene_names)} genes exist in this dataset.")
# initialize gene and transcription
tss_zxys_list = [[[] for _gene in gene_names] for _id in chr_ids]
transcription_profiles = [[[] for _gene in gene_names] for _id in chr_ids]
# loop through to get info
for _cid, _gene_info, _trans_info, _tss_locations in tqdm(zip(rep1_info_dict['Chromosome copy number'],
rep1_info_dict['Gene names'],
rep1_info_dict['Transcription'],
rep1_info_dict['TSS ZXY(nm)'])):
# get chromosome inds
_cid = int(_cid)
_cind = np.where(chr_ids == _cid)[0][0]
# process if there are genes in this region:
if _gene_info != '':
# split by semicolon
_genes = _gene_info.split(';')[:-1]
_transcribes = _trans_info.split(';')[:-1]
_tss_zxys = _tss_locations.split(';')[:-1]
for _gene, _transcribe, _tss_zxy in zip(_genes, _transcribes, _tss_zxys):
# get gene index
_gind = gene_names.index(_gene)
# get transcription profile
if _transcribe == 'on':
transcription_profiles[_cind][_gind] = True
else:
transcription_profiles[_cind][_gind] = False
# get coordinates
_tss_zxy = np.array([np.float(_c) for _c in re.split(r'\s+', _tss_zxy.split('[')[1].split(']')[0]) if _c != ''])
tss_zxys_list[_cind][_gind] = _tss_zxy
tss_zxys_list = np.array(tss_zxys_list)
transcription_profiles = np.array(transcription_profiles)
data_rep1['gene_names'] = gene_names
data_rep1['tss_zxys'] = tss_zxys_list
data_rep1['trans_pfs'] = transcription_profiles
# clean up cell_cycle states
if 'Cell cycle state' in rep1_info_dict:
cell_cycle_types = np.unique(rep1_info_dict['Cell cycle state'])
cell_cycle_flag_dict = {_k:[[] for _id in chr_ids] for _k in cell_cycle_types if _k != 'ND'}
for _cid, _state in tqdm(zip(rep1_info_dict['Chromosome copy number'],rep1_info_dict['Cell cycle state'])):
# get chromosome inds
_cid = int(_cid)
_cind = np.where(chr_ids == _cid)[0][0]
if np.array([_v[_cind]==[] for _k,_v in cell_cycle_flag_dict.items()]).any():
for _k,_v in cell_cycle_flag_dict.items():
if _k == _state:
_v[_cind] = True
else:
_v[_cind] = False
# append to data
for _k, _v in cell_cycle_flag_dict.items():
data_rep1[f'{_k}_flags'] = np.array(_v)
data_rep1.keys()
###Output
_____no_output_____
###Markdown
1. population averaged description of chr21 1.1 (FigS1F) population average maps:median distance map, proximity frequency map, corresponding Hi-C map
###Code
zxys_rep1_list = np.array(data_rep1['dna_zxys'])
distmap_rep1_list = np.array([squareform(pdist(_zxy)) for _zxy in tqdm(zxys_rep1_list)])
# generate median distance map
median_distance_map_rep1 = np.nanmedian(distmap_rep1_list, axis = 0)
# generate contact map
contact_th = 500
contact_map_rep1 = np.nanmean(distmap_rep1_list < contact_th, axis=0)
# load Hi-C
hic_file = r'C:\Users\Bogdan\Dropbox\2020 Chromatin Imaging Manuscript\Revision\DataForReviewers\population-averaged\hi-c_contacts_chromosome21.tsv'
hic_txt = np.array([ln[:-1].split('\t')for ln in open(hic_file,'r') if len(ln)>1])
hic_raw_map = np.array(hic_txt[1:,1:],dtype=np.float)
from matplotlib.colors import LogNorm
median_limits = [0, 2000]
median_cmap = matplotlib.cm.get_cmap('seismic_r')
median_cmap.set_bad(color=[0.,0.,0.,1])
contact_limits = [0.05, 0.6]
contact_norm = LogNorm(vmin=np.min(contact_limits),
vmax=np.max(contact_limits))
contact_cmap = matplotlib.cm.get_cmap('seismic')
contact_cmap.set_bad(color=[0.,0.,0.,1])
hic_limits = [1, 400]
hic_norm = LogNorm(vmin=np.min(hic_limits),
vmax=np.max(hic_limits))
hic_cmap = matplotlib.cm.get_cmap('seismic')
hic_cmap.set_bad(color=[0.,0.,0.,1])
from source.figure_tools.distmap import plot_distance_map
print(figure_folder)
%matplotlib inline
distmap_ax = plot_distance_map(median_distance_map_rep1,
cmap=median_cmap,
color_limits=median_limits,
tick_labels=mid_positions_Mb,
ax_label='Genomic positions (Mb)',
colorbar_labels='Distances (nm)',
save=True, save_folder=figure_folder,
save_basename=f'FigS1F1_median_distmap_rep1.pdf',
font_size=5)
contact_ax = plot_distance_map(contact_map_rep1,
cmap=contact_cmap,
color_limits=contact_limits,
color_norm=contact_norm,
tick_labels=mid_positions_Mb,
ax_label='Genomic positions (Mb)',
colorbar_labels='Proximity frequency',
save=True, save_folder=figure_folder,
save_basename=f'FigS1F2_contact_map_rep1.pdf',
font_size=5)
hic_ax = plot_distance_map(hic_raw_map,
cmap=hic_cmap,
color_limits=hic_limits,
color_norm=hic_norm,
tick_labels=mid_positions_Mb,
ax_label='Genomic positions (Mb)',
colorbar_labels='Hi-C count',
save=True, save_folder=figure_folder,
save_basename=f'FigS1F3_hic_map.pdf',
font_size=5)
###Output
_____no_output_____
###Markdown
1.2 (FigS1G) Correlation between median spatial distance and Hi-C contact counts
###Code
good_spot_flags = np.isnan(np.array(zxys_rep1_list)).sum(2)==0
failure_rates = 1 - np.mean(good_spot_flags, axis=0)
good_regions_rep1 = np.where(failure_rates < 0.25)[0]
print(len(good_regions_rep1))
kept_median_rep1 = median_distance_map_rep1[good_regions_rep1][:,good_regions_rep1]
kept_hic_rep1 = hic_raw_map[good_regions_rep1][:,good_regions_rep1]
wt_median_entries_rep1 = kept_median_rep1[np.triu_indices(len(kept_median_rep1),1)]
hic_contact_entries_rep1 = kept_hic_rep1[np.triu_indices(len(kept_hic_rep1),1)]
kept = (wt_median_entries_rep1>0) * (hic_contact_entries_rep1>0)
from scipy.stats import linregress, pearsonr
lr_rep1 = linregress(np.log(wt_median_entries_rep1[kept]),
np.log(hic_contact_entries_rep1[kept]))
print(lr_rep1)
print('pearson correlation:', np.abs(lr_rep1.rvalue))
xticks = np.round(2**np.linspace(-2,1,4)*1000,0).astype(np.int)
yticks = np.logspace(0, 4, 3).astype(np.int)
xlim = [200,2200]
# draw scatter plot
fig, ax = plt.subplots(figsize=(_single_col_width, _single_col_width), dpi=_dpi)
ax.plot(wt_median_entries_rep1[kept], hic_contact_entries_rep1[kept], '.', color='gray', alpha=0.3, markersize=1, )
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xticks(xticks, minor=False)
ax.set_xticklabels(xticks)
ax.tick_params('both', labelsize=_font_size,
width=_ticklabel_width, length=_ticklabel_size,
pad=1)
[i[1].set_linewidth(_ticklabel_width) for i in ax.spines.items()]
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xlabel('Median distances (nm)', labelpad=2, fontsize=_font_size+1)
ax.set_ylabel('Hi-C counts', labelpad=2, fontsize=_font_size+1)
ax.set_xlim(xlim)
ax.set_ylim([0.9,10000])
#ax.set_yticks(yticks, minor=True)
#ax.set_yticklabels(yticks)
reg_x = np.linspace(250, 2000, 100)
reg_y = np.exp( lr_rep1.slope * np.log(reg_x) + lr_rep1.intercept)
ax.plot(reg_x, reg_y, 'r', label=f'slope = {lr_rep1.slope:.2f}\n\u03C1 = {lr_rep1.rvalue:.2f}')
plt.legend(loc='upper right', fontsize=_font_size-1)
plt.gcf().subplots_adjust(bottom=0.15, left=0.15)
plt.savefig(os.path.join(figure_folder, 'FigS1G_scatter_median_hic_rep1.pdf'), transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Zoomed in correlation
###Code
limits = [325, 390]
crop = slice(limits[0],limits[1])
good_crop_regions_rep1 = np.array([_r for _r in good_regions_rep1 if _r in np.arange(limits[0], limits[1])], dtype=np.int)
kept_crop_median_rep1 = median_distance_map_rep1[good_crop_regions_rep1][:,good_crop_regions_rep1]
kept_crop_hic_rep1 = hic_raw_map[good_crop_regions_rep1][:,good_crop_regions_rep1]
wt_crop_median_entries_rep1 = kept_crop_median_rep1[np.triu_indices(len(kept_crop_median_rep1),1)]
hic_crop_contact_entries_rep1 = kept_crop_hic_rep1[np.triu_indices(len(kept_crop_hic_rep1),1)]
kept_crop_rep1 = (wt_crop_median_entries_rep1>0) * (hic_crop_contact_entries_rep1>0)
from scipy.stats import linregress, pearsonr
lr_crop_rep1 = linregress(np.log(wt_crop_median_entries_rep1[kept_crop_rep1]),
np.log(hic_crop_contact_entries_rep1[kept_crop_rep1]))
print(lr_crop_rep1)
print('pearson correlation:', np.abs(lr_crop_rep1.rvalue))
# Plot
xticks = np.round(2**np.linspace(-2,1,4)*1000,0).astype(np.int)
yticks = np.logspace(0, 4, 3).astype(np.int)
xlim = [160,1700]
# draw scatter plot
fig, ax = plt.subplots(figsize=(_single_col_width, _single_col_width), dpi=_dpi)
ax.plot(wt_crop_median_entries_rep1[kept_crop_rep1],
hic_crop_contact_entries_rep1[kept_crop_rep1], '.', color='gray', alpha=0.3, markersize=1, )
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xticks(xticks, minor=False)
ax.set_xticklabels(xticks)
ax.tick_params('both', labelsize=_font_size,
width=_ticklabel_width, length=_ticklabel_size,
pad=1)
[i[1].set_linewidth(_ticklabel_width) for i in ax.spines.items()]
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.set_xlabel('Median distances (nm)', labelpad=2, fontsize=_font_size+1)
ax.set_ylabel('Hi-C counts', labelpad=2, fontsize=_font_size+1)
reg_x = np.linspace(250, 2000, 100)
reg_y = np.exp( lr_crop_rep1.slope * np.log(reg_x) + lr_crop_rep1.intercept)
ax.set_xlim(xlim)
ax.set_ylim([0.9,10000])
ax.plot(reg_x, reg_y, 'r', label=f'slope = {lr_crop_rep1.slope:.2f}\n\u03C1 = {lr_crop_rep1.rvalue:.2f}')
plt.legend(loc='upper right', fontsize=_font_size-1)
plt.gcf().subplots_adjust(bottom=0.15, left=0.15)
plt.savefig(os.path.join(figure_folder, f'FigS1I_scatter_median_hic_{limits}.pdf'), transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Determine the distance threshold giving the best correlation between proximity frequency and Hi-C
###Code
# generate contact maps
contact_map_dict_rep1 = {}
thr_list = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
for _thr in thr_list:
print(_thr)
# new way to calculate contact
contact_map_dict_rep1[_thr] = np.sum(distmap_rep1_list<_thr, axis=0) / np.sum(np.isnan(distmap_rep1_list)==False, axis=0)
# calculate pearson correlation with Hi-C
pearson_corr_list_rep1 = []
for _thr in thr_list:
_contact_map = contact_map_dict_rep1[_thr]
good_spot_flags = np.isnan(np.array(zxys_rep1_list)).sum(2)==0
failure_rates = 1 - np.mean(good_spot_flags, axis=0)
good_regions = np.where(failure_rates < 0.25)[0]
#print(len(good_regions))
kept_contacts = _contact_map[good_regions][:,good_regions]
kept_hic = hic_raw_map[good_regions][:,good_regions]
wt_contact_entries = kept_contacts[np.triu_indices(len(kept_contacts),1)]
hic_contact_entries = kept_hic[np.triu_indices(len(kept_hic),1)]
kept = (wt_contact_entries>0) * (hic_contact_entries>0)
from scipy.stats import linregress, pearsonr
lr = linregress(np.log(wt_contact_entries[kept]),
np.log(hic_contact_entries[kept]))
print(_thr, 'nm; pearson correlation:', np.abs(lr.rvalue))
pearson_corr_list_rep1.append(lr.rvalue)
%matplotlib inline
fig, ax = plt.subplots(figsize=(_single_col_width, _single_col_width), dpi=600)
ax.plot(thr_list, pearson_corr_list_rep1, linewidth=1, alpha=0.7, marker ='.')
ax.tick_params('both', labelsize=_font_size,
width=_ticklabel_width, length=_ticklabel_size-1,
pad=1, labelleft=True) # remove bottom ticklabels for ax
[i[1].set_linewidth(_ticklabel_width) for i in ax.spines.items()]
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_ylim([0.725,0.9])
ax.set_xlim([0,1050])
ax.set_xticks(np.arange(0,1001,200))
ax.set_yticks(np.arange(0.7,0.91,0.05))
ax.set_xlabel("Cutoff threshold (nm)", fontsize=_font_size, labelpad=1)
ax.set_ylabel("Pearson correlation with Hi-C", fontsize=_font_size, labelpad=1)
plt.gcf().subplots_adjust(bottom=0.15, left=0.16)
plt.savefig(os.path.join(figure_folder, f'FigS1J_chr21_proximity_hic_pearson_with_thresholds_rep1.pdf'), transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
2. Analysis of single-cell domains 2.1.1 Find single-cell domains
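The domain caller below returns, for each chromosome, an array of domain start indices (with the chromosome ends included as the first and last entries); later cells slice the coordinate arrays with consecutive pairs of these starts. As a small illustration of that convention (made-up indices, not real output):

```python
import numpy as np

example_starts = np.array([0, 12, 30, 47, 651])  # hypothetical domain starts for one chromosome
example_domains = list(zip(example_starts[:-1], example_starts[1:]))
print(example_domains)  # [(0, 12), (12, 30), (30, 47), (47, 651)]
```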
###Code
import source.domain_tools.DomainAnalysis as da
import multiprocessing as mp
num_threads=32
domain_corr_cutoff = 0.75
domain_dist_cutoff = 500 # nm
_domain_args = [(_zxys, 4, 1000, domain_corr_cutoff, domain_dist_cutoff)
for _zxys in data_rep1['dna_zxys']]
_domain_time = time.time()
print(f"Multiprocessing call domain starts", end=' ')
if 'domain_starts' not in data_rep1:
with mp.Pool(num_threads) as domain_pool:
domain_results = domain_pool.starmap(da.get_dom_starts_cor, _domain_args)
domain_pool.close()
domain_pool.join()
domain_pool.terminate()
# save
data_rep1['domain_starts'] = [np.array(_r[-1]) for _r in domain_results]
data_rep1['params']['domain_corr_cutoff'] = domain_corr_cutoff
data_rep1['params']['domain_dist_cutoff'] = domain_dist_cutoff
print(f"in {time.time()-_domain_time:.3f}s.")
###Output
_____no_output_____
###Markdown
2.1 Add 100 nm of localization noise and re-identify single-cell domains
###Code
from copy import deepcopy
data_noise = deepcopy(data_rep1)
del(data_noise['domain_starts'])
# per-axis sigma of 100/1.6 nm gives a mean 3D displacement of roughly 100 nm
# (the mean norm of an isotropic 3D Gaussian is sigma*2*sqrt(2/pi), i.e. about 1.6*sigma)
data_noise['dna_zxys']+=np.random.normal(scale=100/1.6,size=data_noise['dna_zxys'].shape)
dist_dif = np.linalg.norm(data_rep1['dna_zxys']-data_noise['dna_zxys'],axis=-1)
print("Displacement error:",np.nanmean(dist_dif))
import source.domain_tools.DomainAnalysis as da
import multiprocessing as mp
num_threads=32
domain_corr_cutoff = 0.75
domain_dist_cutoff = 500 # nm
_domain_args = [(_zxys, 4, 1000, domain_corr_cutoff, domain_dist_cutoff)
for _zxys in data_noise['dna_zxys']]
_domain_time = time.time()
print(f"Multiprocessing call domain starts", end=' ')
if 'domain_starts' not in data_noise:
with mp.Pool(num_threads) as domain_pool:
domain_results = domain_pool.starmap(da.get_dom_starts_cor, _domain_args)
domain_pool.close()
domain_pool.join()
domain_pool.terminate()
# save
data_noise['domain_starts'] = [np.array(_r[-1]) for _r in domain_results]
data_noise['params']['domain_corr_cutoff'] = domain_corr_cutoff
data_noise['params']['domain_dist_cutoff'] = domain_dist_cutoff
print(f"in {time.time()-_domain_time:.3f}s.")
###Output
_____no_output_____
###Markdown
2.2 Genomic size and radius of gyration
###Code
# genomic sizes
region_size = 0.05 # Mb
rep1_sz_list = []
for _zxys, _dm_starts in zip(data_rep1['dna_zxys'],data_rep1['domain_starts']):
_starts = _dm_starts[:-1]
_ends = _dm_starts[1:]
# sizes
_sizes = (_dm_starts[1:] - _dm_starts[:-1]) * region_size
# append
rep1_sz_list.append(_sizes)
noise_sz_list = []
for _zxys, _dm_starts in zip(data_noise['dna_zxys'],data_noise['domain_starts']):
_starts = _dm_starts[:-1]
_ends = _dm_starts[1:]
# sizes
_sizes = (_dm_starts[1:] - _dm_starts[:-1]) * region_size
# append
noise_sz_list.append(_sizes)
%matplotlib inline
fig, ax = plt.subplots(figsize=(_single_col_width, _single_col_width), dpi=600)
ax.hist(np.concatenate(noise_sz_list), 100, range=(0,5),
density=True, color='k', alpha=1, label=f'median={np.nanmedian(np.concatenate(rep1_sz_list)):.2f}Mb')
ax.hist(np.concatenate(rep1_sz_list), 100, range=(0,5),
density=True, color='g', alpha=0.5, label=f'median={np.nanmedian(np.concatenate(rep1_sz_list)):.2f}Mb')
ax.tick_params('both', labelsize=_font_size,
width=_ticklabel_width, length=_ticklabel_size-1,
pad=1, labelleft=True) # remove bottom ticklabels for ax
[i[1].set_linewidth(_ticklabel_width) for i in ax.spines.items()]
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
#ax.legend(fontsize=_font_size-1)
ax.set_xlabel("Genomic size (Mb)", labelpad=1, fontsize=_font_size)
ax.set_ylabel("Probability density", labelpad=1, fontsize=_font_size)
ax.set_title("Chr21 domain genomic size", pad=2, fontsize=_font_size)
plt.gcf().subplots_adjust(bottom=0.15, left=0.16)
save_file = os.path.join(figure_folder, f'Fig1I_chr21_domain_gsize_hist_rep1.pdf')
plt.savefig(save_file, transparent=True)
print(save_file)
plt.show()
def rg_mean(zxy):
"""computes radius of gyration"""
zxy_ = np.array(zxy)
zxy_ = zxy_[~np.isnan(zxy_[:,0])]
zxy_ = zxy_ - np.mean(zxy_,0)
return np.sqrt(np.mean(np.sum(zxy_**2,axis=-1)))
# radius of gyrations
rep1_rg_list = []
for _zxys, _dm_starts in zip(data_rep1['dna_zxys'],data_rep1['domain_starts']):
_starts = _dm_starts[:-1]
_ends = _dm_starts[1:]
# rgs
_rgs = np.array([rg_mean(_zxys[_s:_e]) for _s, _e in zip(_starts, _ends)])
# append
rep1_rg_list.append(_rgs)
# radius of gyration for noise
noise_rg_list = []
for _zxys, _dm_starts in zip(data_noise['dna_zxys'],data_noise['domain_starts']):
_starts = _dm_starts[:-1]
_ends = _dm_starts[1:]
# rgs
_rgs = np.array([rg_mean(_zxys[_s:_e]) for _s, _e in zip(_starts, _ends)])
# append
noise_rg_list.append(_rgs)
fig, ax = plt.subplots(figsize=(_single_col_width, _single_col_width), dpi=600)
ax.hist(np.concatenate(noise_rg_list), 100, range=(0,1500),
density=True, color='k', alpha=1, label=f'median={np.nanmedian(np.concatenate(rep1_rg_list)):.0f}nm')
ax.hist(np.concatenate(rep1_rg_list), 100, range=(0,1500),
density=True, color='g', alpha=0.6, label=f'median={np.nanmedian(np.concatenate(rep1_rg_list)):.0f}nm')
ax.tick_params('both', labelsize=_font_size-1,
width=_ticklabel_width, length=_ticklabel_size-1,
pad=1, labelleft=True) # remove bottom ticklabels for ax
[i[1].set_linewidth(_ticklabel_width) for i in ax.spines.items()]
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
#ax.legend(fontsize=_font_size-1)
ax.set_xlabel("Radius of gyration (nm)", labelpad=1, fontsize=_font_size)
ax.set_ylabel("Probability density", labelpad=1, fontsize=_font_size)
ax.set_title("Chr21 domain radius of gyration", pad=2, fontsize=_font_size)
plt.gcf().subplots_adjust(bottom=0.15, left=0.16)
save_file = os.path.join(figure_folder, f'Fig1J_chr21_domain_RG_hist_rep1.pdf')
plt.savefig(save_file, transparent=True)
print(save_file)
plt.show()
###Output
_____no_output_____
###Markdown
2.3 Single-cell boundary probability, insulation and alignment with CTCF/TADs
###Code
dom_starts_fs = data_rep1['domain_starts']
zxys = data_rep1['dna_zxys']
pts= zxys
###Output
_____no_output_____
###Markdown
calculate boundary probability
###Code
dom_all = np.array([dom for doms in dom_starts_fs[::] for dom in doms[1:-1]])
unk_,cts_=np.unique(dom_all,return_counts=True)
cts = np.zeros(len(pts[0]))
cts[unk_]=cts_
###Output
_____no_output_____
###Markdown
boundary probability for a zoom-in example
###Code
import matplotlib.pylab as plt
import numpy as np
import pickle,os
from mpl_toolkits.mplot3d import Axes3D
from scipy.spatial.distance import pdist,cdist,squareform
####### You will need cv2. If you do not have it, run: pip install opencv-python
import cv2
from matplotlib import cm
def resize(im__,scale_percent = 100):
width = int(im__.shape[1] * scale_percent / 100)
height = int(im__.shape[0] * scale_percent / 100)
dim = (width, height)
resized = cv2.resize(im__, dim, interpolation = cv2.INTER_NEAREST)
return resized
def rotate_bound(image, angle):
# grab the dimensions of the image and then determine the
# center
(h, w) = image.shape[:2]
(cX, cY) = (w // 2, h // 2)
# grab the rotation matrix (applying the negative of the
# angle to rotate clockwise), then grab the sine and cosine
# (i.e., the rotation components of the matrix)
M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
cos = np.abs(M[0, 0])
sin = np.abs(M[0, 1])
# compute the new bounding dimensions of the image
nW = int((h * sin) + (w * cos))
nH = int((h * cos) + (w * sin))
# adjust the rotation matrix to take into account translation
M[0, 2] += (nW / 2) - cX
M[1, 2] += (nH / 2) - cY
# perform the actual rotation and return the image
return cv2.warpAffine(image, M, (nW, nH),cv2.INTER_NEAREST)
def interp1dnan(A):
A_=np.array(A)
ok = np.isnan(A)==False
xp = ok.nonzero()[0]
fp = A[ok]
x = np.isnan(A).nonzero()[0]
A_[np.isnan(A)] = np.interp(x, xp, fp)
return A_
def interpolate_chr(_chr):
"""linear interpolate chromosome coordinates"""
_new_chr = np.array(_chr)
for i in range(_new_chr.shape[-1]):
_new_chr[:,i]=interp1dnan(_new_chr[:,i])
return _new_chr
from mpl_toolkits.axes_grid1 import ImageGrid
from matplotlib import cm
fig = plt.figure(figsize=(20,20))
grid = ImageGrid(fig, 111, nrows_ncols=(4, 1),axes_pad=0.)
mat_ = np.log(contact_map_rep1)
pad=0
min_val,max_val = -2,None # minimum and maximum of the log proximity frequency; this sets the color scale of the image
if max_val is None: max_val = np.nanmax(mat_)
if min_val is None: min_val = np.nanmin(mat_)
#This colors the image
im_ = (np.clip(mat_,min_val,max_val)-min_val)/(max_val-min_val)
im__ = np.array(cm.seismic(im_)[:,:,:3]*255,dtype=np.uint8)
# resize image 10x to get good resolution
resc = 10############
resized = resize(im__,resc*100)
# Rotate 45 degs
resized = rotate_bound(resized,-45)
start = int(pad*np.sqrt(2)*resc)
center = int(resized.shape[1]/2)
#Clip it to the desired size
padup=30##### how much of the matrix to keep in the up direction
resized = resized[center-resc*padup:center+resc*padup]
#List of positions of CTCF and rad21 in chr21
#ctcf
ctcf = [ 9, 21, 33, 67, 73, 78, 139, 226, 231, 235, 242, 253, 256,
273, 284, 292, 301, 307, 339, 350, 355, 363, 366, 370, 373, 376,
381, 385, 390, 396, 402, 405, 410, 436, 440, 446, 456, 469, 472,
476, 482, 485, 488, 492, 500, 505, 508, 512, 520, 540, 543, 550,
554, 560, 565, 576, 580, 585, 589, 592, 595, 599, 602, 606, 615,
619, 622, 625, 628, 633, 636, 639]
# rad21
rad21=[ 21, 33, 67, 73, 139, 226, 231, 236, 242, 256, 273, 284, 292,
301, 305, 339, 350, 355, 363, 366, 370, 381, 386, 390, 396, 405,
410, 415, 436, 440, 446, 456, 469, 472, 482, 485, 492, 500, 505,
508, 512, 543, 550, 554, 560, 576, 581, 585, 589, 593, 596, 599,
602, 615, 619, 622, 625, 628, 633, 636]
start = 0
min__ = 0
cts_perc = 1.*cts/len(pts)*100*resc
x_vals = (np.arange(len(cts_perc))-min__)*resc*np.sqrt(2)-start
#grid[1].imshow(A_.T,cmap='bwr')
grid[1].plot(x_vals,cts_perc,'ko-')
grid[0].imshow(resized)
grid[2].plot(x_vals[ctcf],[0]*len(ctcf),'^',color='orange',mec='k')
grid[3].plot(x_vals[rad21],[0]*len(rad21),'^',color='yellow',mec='k')
ypad=20
grid[2].set_ylim([-ypad,ypad])
grid[3].set_ylim([-ypad,ypad])
grid[2].set_yticks([])
grid[3].set_yticks([])
#grid[1].set_yticks([])
#grid[1].set_ylabel('AB ',rotation='horizontal')
grid[2].set_ylabel('CTCF ',rotation='horizontal')
grid[3].set_ylabel('RAD21 ',rotation='horizontal')
min_,max_ = (282, 480)
grid[0].set_xlim([min_*resc*np.sqrt(2),max_*resc*np.sqrt(2)])
plt.savefig(os.path.join(figure_folder,
f'Fig1C_chr21_sc-domain_prob_rep1.pdf'), transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Calculate TADs
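The insulation profiles below come from `_sliding_window_dist` in `source.domain_tools.distance` with the 'normed_insulation' metric. As a rough, hypothetical illustration of the idea only (not that function's actual implementation), a window-based insulation profile can be computed from a distance map by comparing distances across each position with distances within the two flanking windows:

```python
import numpy as np

def insulation_profile_sketch(dist_map, wd=8):
    """Toy normalized insulation: median across-window distance vs. median within-window distance."""
    n = len(dist_map)
    profile = np.full(n, np.nan)
    for i in range(wd, n - wd):
        across = dist_map[i - wd:i, i:i + wd]
        left = dist_map[i - wd:i, i - wd:i][np.triu_indices(wd, 1)]
        right = dist_map[i:i + wd, i:i + wd][np.triu_indices(wd, 1)]
        within = np.concatenate([left, right])
        profile[i] = (np.nanmedian(across) - np.nanmedian(within)) / (np.nanmedian(across) + np.nanmedian(within))
    return profile
```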
###Code
#median_distance_map_rep1
#contact_map_dict_rep1
zxys_noise_list = np.array(data_noise['dna_zxys'])
distmap_noise_list = np.array([squareform(pdist(_zxy)) for _zxy in tqdm(zxys_noise_list)])
# generate median distance map
median_distance_map_noise = np.nanmedian(distmap_noise_list, axis = 0)
# generate contact map
contact_th = 500
contact_map_noise = np.nanmean(distmap_noise_list < contact_th, axis=0)
from source.domain_tools.distance import _sliding_window_dist
from scipy.signal import find_peaks
distance_wd_dists = _sliding_window_dist(median_distance_map_rep1, _wd=8,
_dist_metric='normed_insulation')
distance_wd_dists_ = _sliding_window_dist(median_distance_map_noise, _wd=8,
_dist_metric='normed_insulation')
distance_peaks = find_peaks(-distance_wd_dists_, distance=5, prominence=0.013, width=3)
fig = plt.figure(figsize=(9,2),dpi=200)
plt.plot(-distance_wd_dists_, color='k', label='simulation', linewidth=1)
for _p in distance_peaks[0]:
plt.vlines(_p, 0, 1, linewidth=0.5, linestyles="dotted",color='k' )
distance_peaks = find_peaks(-distance_wd_dists, distance=5, prominence=0.013, width=3)
plt.plot(-distance_wd_dists, color=[1,0.4,0], label='data', linewidth=1)
for _p in distance_peaks[0]:
plt.vlines(_p, 0, 1, linewidth=0.5, linestyles="dotted",color='k' )
#plt.legend()
plt.ylim([0,0.5])
plt.xlim([0,651])
plt.xlabel('Genomic coordinate')
plt.ylabel('Insulation')
plt.show()
#folder_ = r'C:\Users\Bogdan\Dropbox\2020 Chromatin Imaging Manuscript\Revision\FinalFigures\Figure S1\100nmDisplacement_simmulation_newAnalysis'
#fig.savefig(folder_+os.sep+'TADsInsulation_medianDistance.pdf')
contact_wd_dists = _sliding_window_dist(contact_map_rep1, _wd=8, _dist_metric='normed_insulation')
contact_wd_dists_ = _sliding_window_dist(contact_map_noise, _wd=8, _dist_metric='normed_insulation')
from scipy.signal import find_peaks
fig = plt.figure(figsize=(9,2),dpi=200)
plt.plot(contact_wd_dists_,color='k',linewidth=2,label='100nm displaced loci')
plt.plot(contact_wd_dists,linewidth=1,color='g',label='original data')
contact_peaks = find_peaks(contact_wd_dists, distance=5, prominence=0.022, width=3)
for _p in contact_peaks[0]:
plt.vlines(_p, 0, 1, linewidth=0.5, linestyles="dotted", color='k')
contact_peaks_ = find_peaks(contact_wd_dists_, distance=5, prominence=0.022, width=3)
for _p in contact_peaks_[0]:
plt.vlines(_p, 0, 1, linewidth=0.5, linestyles="dotted", color='k')
plt.legend()
plt.ylim([0,0.5])
plt.xlim([0,651])
plt.xlabel('Genomic coordinate')
plt.ylabel('Insulation')
plt.show()
TADs = contact_peaks[0]
hic_wd_dists = _sliding_window_dist(hic_raw_map, _wd=8, _dist_metric='normed_insulation')
from scipy.signal import find_peaks
fig = plt.figure(figsize=(9,2),dpi=200)
plt.plot(hic_wd_dists,linewidth=1,color='r',label='original data')
hic_peaks = find_peaks(hic_wd_dists, distance=5, prominence=0.08, width=3)
for _p in hic_peaks[0]:
plt.vlines(_p, 0, 1, linewidth=0.5, linestyles="dotted", color='k')
plt.legend()
plt.ylim([0,1])
plt.xlim([0,651])
plt.xlabel('Genomic coordinate')
plt.ylabel('Insulation')
plt.show()
#List of positions of CTCF and rad21 in chr21
#ctcf
ctcf = [ 9, 21, 33, 67, 73, 78, 139, 226, 231, 235, 242, 253, 256,
273, 284, 292, 301, 307, 339, 350, 355, 363, 366, 370, 373, 376,
381, 385, 390, 396, 402, 405, 410, 436, 440, 446, 456, 469, 472,
476, 482, 485, 488, 492, 500, 505, 508, 512, 520, 540, 543, 550,
554, 560, 565, 576, 580, 585, 589, 592, 595, 599, 602, 606, 615,
619, 622, 625, 628, 633, 636, 639]
# rad21
rad21=[ 21, 33, 67, 73, 139, 226, 231, 236, 242, 256, 273, 284, 292,
301, 305, 339, 350, 355, 363, 366, 370, 381, 386, 390, 396, 405,
410, 415, 436, 440, 446, 456, 469, 472, 482, 485, 492, 500, 505,
508, 512, 543, 550, 554, 560, 576, 581, 585, 589, 593, 596, 599,
602, 615, 619, 622, 625, 628, 633, 636]
#A = [255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 494, 495, 496, 497, 498, 499, 500, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650]
#A_ = np.zeros(len(zxys[0])+2)
#A_[np.array(A)+1]=1
#AB_bds = np.abs(np.diff(A_))
#AB_bds = np.where(AB_bds)[0]
pts = data_rep1['dna_zxys']
import matplotlib
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['font.size']=15
matplotlib.rcParams['font.family']='Arial'
fig = plt.figure(figsize=(3,5))
bds_avg_ = TADs #from jun-han - 500nm#[20, 35, 52, 67, 80, 113, 139, 159, 179, 198, 213, 227, 254, 273, 298, 317, 340, 351, 365, 373, 388, 411, 439, 460, 471, 486, 507, 540, 550, 561, 575, 592, 604, 613, 627, 636, 644]
dmat = np.abs(np.array([[bd2-bd1 for bd1 in bds_avg_] for bd2 in np.arange(len(pts[0]))],dtype=int))
dmat = np.array([[bd2-bd1 for bd1 in bds_avg_] for bd2 in np.arange(len(pts[0]))],dtype=int)
range_ = range(-15,15)
yvec = np.array([np.median(cts[np.where(dmat==i)[0]]) for i in range_])
plt.plot(np.array(range_)*50,yvec/len(pts)*100,'o-',color=[0.6]*3,label='Domains')
#dmat = np.abs(np.array([[bd2-bd1 for bd1 in AB_bds] for bd2 in np.arange(len(pts[0]))],dtype=int))
#dmat = np.array([[bd2-bd1 for bd1 in AB_bds] for bd2 in np.arange(len(pts[0]))],dtype=int)
#yvec = np.array([np.median(cts[np.where(dmat==i)[0]]) for i in range_])
#plt.plot(np.array(range_)*50,yvec/len(pts)*100,'ko-',label='A/B compartments')
ctcf_rad21 = np.intersect1d(rad21,ctcf)
dmat = np.array([[bd2-bd1 for bd1 in ctcf_rad21] for bd2 in np.arange(len(pts[0]))],dtype=int)
yvec = np.array([np.median(cts[np.where(dmat==i)[0]]) for i in range_])
plt.plot(np.array(range_)*50,yvec/len(pts)*100,'o-',color='orange',label='CTCF&RAD21')
plt.yticks([5,7.5,10])
plt.ylim([4,14])
plt.xlabel('Genomic distance from boundary (kb)')
plt.ylabel('Single-cell \nboundary probability(%)')
plt.legend()
#folder_ = r'C:\Users\Bogdan\Dropbox\2020 Chromatin Imaging Manuscript\Revision\Bogdan_Figures\Figure1\base_images'
save_file = os.path.join(figure_folder, f'Fig1D_chr21_sc-domain_prob_ctcf_rep1.pdf')
print(save_file)
plt.savefig(save_file, transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Insulation scores at boundaries with vs. without CTCF/RAD21
###Code
ichr=0
ins = []
bdr_ins = []
dom_starts_fs = data_rep1['domain_starts']
zxys = data_rep1['dna_zxys']
for dom_ in tqdm(dom_starts_fs):
zxy_ = zxys[ichr]
for idom in range(1,len(dom_)-3):
a,b,c = dom_[idom],dom_[idom+1],dom_[idom+2]
#a,b,c = dom_[idom+1]-5,dom_[idom+1],dom_[idom+1]+5
zxy1 = zxy_[a:b]
zxy2 = zxy_[b:c]
med_in = np.nanmedian(np.concatenate([pdist(zxy1),pdist(zxy2)]))
med_out = np.nanmedian(cdist(zxy1,zxy2))
ins_ = med_out/med_in
ins.append(ins_)
bdr_ins.append(b)
ichr+=1
bdr_ins=np.array(bdr_ins)
ins = np.array(ins)
nonctcf = np.ones(len(zxys[0]))
nonctcf[ctcf_rad21]=0
nonctcf = np.nonzero(nonctcf)[0]
fig = plt.figure()#figsize=(10,7))
bins=np.linspace(0,4,75)
# use density= (the normed= keyword was removed from matplotlib; density= is used elsewhere in this notebook)
plt.hist(ins[np.in1d(bdr_ins,nonctcf)],alpha=0.5,density=True,bins=bins,color='gray',label = 'non-CTCF/RAD21')
plt.hist(ins[np.in1d(bdr_ins,ctcf_rad21)],alpha=0.5,density=True,bins=bins,color='orange',label = 'CTCF/RAD21')
plt.xlabel('Boundary insulation score')
plt.ylabel('Probability density function')
plt.legend()
plt.savefig(os.path.join(figure_folder, f'Fig1K_chr21_sc-domain_insulation_ctcf_rep1.pdf'), transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
2.5 CTCF end-to-end distances and radii of gyration of CTCF-bound domains
###Code
from tqdm import tqdm_notebook as tqdm
dic_ctcf = {}
dic_nonctcf = {}
def rg_med(zxy):
"""computes radius of gyration"""
zxy_ = np.array(zxy)
zxy_ = zxy_[~np.isnan(zxy_[:,0])]
zxy_ = zxy_ - np.mean(zxy_,0)
return np.sqrt(np.mean(np.sum(zxy_**2,axis=-1)))
def inD(zxy):
"""computest mean interdistance"""
return np.nanmean(pdist(zxy))
dic_rgctcf = {}
dic_rgnonctcf = {}
dic_inDctcf = {}
dic_inDnonctcf = {}
dic_withinDctcf = {}
dic_withinDnonctcf = {}
###################### This does not save each ctcf pair in its own key but already groups by genomic distance
ctcf_or_rad21 = np.union1d(ctcf,rad21)
for ichr in tqdm(range(len(zxys))):
doms=dom_starts_fs[ichr][1:-1]
zxy = zxys[ichr]
for i in range(len(doms)-1):
e1,e2 = doms[i],doms[i+1]-1
dist_ = np.linalg.norm(zxy[e1]-zxy[e2])
gen_dist = e2-e1
rg_ = rg_med(zxy[e1:e2])
inD_ = inD(zxy[e1:e2])
if (e1 in ctcf_or_rad21) and (e2 in ctcf_or_rad21):
dic_ctcf[gen_dist]=dic_ctcf.get(gen_dist,[])+[dist_]
dic_rgctcf[gen_dist]=dic_rgctcf.get(gen_dist,[])+[rg_]
dic_inDctcf[gen_dist]=dic_inDctcf.get(gen_dist,[])+[inD_]
if not np.any(np.in1d([e1,e1+1,e1-1,e2,e2+1,e2-1],ctcf_or_rad21)):
#if not np.any(np.in1d([e1,e2],ctcf_or_rad21)):
dic_nonctcf[gen_dist]=dic_nonctcf.get(gen_dist,[])+[dist_]
dic_rgnonctcf[gen_dist]=dic_rgnonctcf.get(gen_dist,[])+[rg_]
e1p = np.random.randint(e1+1,e2-1)
e2p = np.random.randint(e1p,e2-1)
if not np.any(np.in1d([e1p,e1p+1,e1p-1,e2p,e2p+1,e2p-1],ctcf_or_rad21)):
gen_dist__ = abs(e1p-e2p)
dist__ = np.linalg.norm(zxy[e1p]-zxy[e2p])
dic_withinDnonctcf[gen_dist__]=dic_withinDnonctcf.get(gen_dist__,[])+[dist__]
for e1p in range(e1+1,e2-1):
for e2p in range(e1p,e2-1):
if (e1p in ctcf_or_rad21) and (e2p in ctcf_or_rad21):
gen_dist__ = abs(e1p-e2p)
dist__ = np.linalg.norm(zxy[e1p]-zxy[e2p])
dic_withinDctcf[gen_dist__]=dic_withinDctcf.get(gen_dist__,[])+[dist__]
pickle.dump([dic_ctcf,dic_nonctcf,dic_rgctcf,dic_rgnonctcf,dic_inDctcf,dic_inDnonctcf,dic_withinDctcf,dic_withinDnonctcf],
open(r'C:\Users\Bogdan\Dropbox\Chromosome21_online\rg_and_edge-edge_distance_v2_repeat_testPu','wb'))
gen_dists = np.sort(list(dic_ctcf.keys()))
gen_dists = gen_dists[gen_dists<=28]
gen_dists = gen_dists[gen_dists>=4]
print([len(dic_nonctcf.get(gn,[])) for gn in gen_dists])
def boostrap_err2(x_,y_,func,N=1000,perc_min=5,perc_max=95):
elems = []
for istrap in range(N):
x__ = np.random.choice(x_,[len(x_)])
y__ = np.random.choice(y_,[len(y_)])
elems.append(func(x__,y__))
return (np.nanpercentile(elems,perc_min),np.nanpercentile(elems,perc_max))
gen_dists = np.sort(list(dic_ctcf.keys()))
gen_dists = gen_dists[gen_dists<=28]
gen_dists = gen_dists[gen_dists>=4]
def func(x,y): return np.nanmedian(x)/np.nanmedian(y)
xelems = gen_dists*50
meds_ctcf = [func(dic_ctcf.get(gn,[]),dic_withinDctcf.get(gn,[])) for gn in gen_dists]
errs_ctcf = np.abs(np.array([boostrap_err2(dic_ctcf.get(gn,[]),dic_withinDctcf.get(gn,[]),func)
for gn in tqdm(gen_dists)]).T-meds_ctcf)
xelems = gen_dists*50
meds_non = [func(dic_nonctcf.get(gn,[]),dic_withinDnonctcf.get(gn,[])) for gn in gen_dists]
errs_non = np.abs(np.array([boostrap_err2(dic_nonctcf.get(gn,[]),dic_withinDnonctcf.get(gn,[]),func)
for gn in tqdm(gen_dists)]).T-meds_non)
fig = plt.figure()
xelems = gen_dists*50
plt.errorbar(xelems,meds_ctcf,
yerr=errs_ctcf,
color='orange',mec='k',label='CTCF/cohesin domains',marker='o')
plt.errorbar(xelems,meds_non,
yerr=errs_non,
color='gray',mec='k',label='non-CTCF/cohesin domains',marker='o')
plt.legend()
plt.ylim([0.5,1.75])
plt.ylabel('Median edge distance/ \nMedian distance within domains')
plt.xlabel('Genomic distance (kb)')
#folder_ = r'C:\Users\Bogdan\Dropbox\2020 Chromatin Imaging Manuscript\Revision\FinalFigures\Figure 1\subpanels'
#fig.savefig(folder_+os.sep+r'Fig1L_new.pdf')
### Radius of gyration
def boostrap_err(x_,func,N=1000,perc_min=5,perc_max=95):
elems = []
for istrap in range(N):
elems.append(func(np.random.choice(x_,[len(x_)])))
return (np.nanpercentile(elems,perc_min),np.nanpercentile(elems,perc_max))
gen_dists = np.sort(list(dic_rgctcf.keys()))
gen_dists = gen_dists[gen_dists<=28]
gen_dists = gen_dists[gen_dists>=4]
func = np.nanmedian
xelems = gen_dists*50
meds_ctcf_rg = [func(dic_rgctcf[gn]) for gn in gen_dists]
errs_ctcf_rg = np.abs(np.array([boostrap_err(dic_rgctcf[gn],func) for gn in gen_dists]).T-meds_ctcf_rg)
xelems = gen_dists*50
meds_non_rg = [func(dic_rgnonctcf[gn]) for gn in gen_dists]
errs_non_rg = np.abs(np.array([boostrap_err(dic_rgnonctcf[gn],func) for gn in gen_dists]).T-meds_non_rg)
fig = plt.figure(figsize=(5,5))#figsize=(8,3))
plt.errorbar(xelems,meds_ctcf_rg,
yerr=errs_ctcf_rg,
color='orange',mec='k',label='CTCF/cohesin domains',marker='o')
plt.errorbar(xelems,meds_non_rg,
yerr=errs_non_rg,
color='gray',mec='k',label='non-CTCF/cohesin domains',marker='o')
plt.ylabel('Radius of gyration(nm)')
plt.xlabel('Genomic distance (kb)')
plt.legend()
save_file = os.path.join(figure_folder, f'Fig1M_chr21_domain_rg_ctcf_rep1.pdf')
plt.savefig(save_file, transparent=True)
print(save_file)
###Output
_____no_output_____
###Markdown
Radius of gyration vs genomic distance
###Code
dom_starts_fs = data_rep1['domain_starts']
zxys = data_rep1['dna_zxys'][:,:,1:]
pts=zxys
def rg_med(zxy):
"""computes radius of gyration"""
zxy_ = np.array(zxy)
zxy_ = zxy_[~np.isnan(zxy_[:,0])]
zxy_ = zxy_ - np.mean(zxy_,0)
return np.sqrt(np.mean(np.sum(zxy_**2,axis=-1)))
dic_rg = {}
for ichr in tqdm(range(len(pts))):
doms=dom_starts_fs[ichr][1:-1]
zxy = pts[ichr]
for i in range(len(doms)-1):
e1,e2 = doms[i],doms[i+1]-1
start = e1
end=e2#-1
rg_ = rg_med(zxy[start:end])
key = end-start
dic_rg[key] = dic_rg.get(key,[])+[rg_]
fig = plt.figure(figsize=(10,5))
keys = np.sort(list(dic_rg.keys()))
keys = keys[keys>=4]
plt.boxplot([dic_rg[gn] for gn in keys][:100-4], notch=True, showfliers=False,whis = [10, 90]);
xlab = np.arange(6)
plt.xticks((xlab-0.2)*1000/50,xlab);
plt.ylabel('Radius of gyration (nm)')
plt.xlabel('Genomic size of single-cell domains (Mb)')
save_file = os.path.join(figure_folder, f'FigS1N_chr21_domain_rgs_vs_genomic.pdf')
plt.savefig(save_file, transparent=True)
print(save_file)
###Output
_____no_output_____
###Markdown
Characterization of domains and compartments in G1/G2-S cells Load data for rep2
###Code
# load from file and extract info
import csv
rep2_info_dict = {}
with open(rep2_filename, 'r') as _handle:
_reader = csv.reader(_handle, delimiter='\t', quotechar='|')
_headers = next(_reader)
print(_headers)
# create keys for each header
for _h in _headers:
rep2_info_dict[_h] = []
# loop through content
for _contents in _reader:
for _h, _info in zip(_headers,_contents):
rep2_info_dict[_h].append(_info)
from tqdm import tqdm_notebook as tqdm
# clean up info
data_rep2 = {'params':{}}
# clean up genomic coordinates
region_names = np.array([_n for _n in sorted(region_names, key=lambda s:int(s.split(':')[1].split('-')[0]))])
region_starts = np.array([int(_n.split(':')[1].split('-')[0]) for _n in region_names])
region_ends = np.array([int(_n.split(':')[1].split('-')[1]) for _n in region_names])[np.argsort(region_starts)]
region_starts = np.sort(region_starts)
mid_positions = ((region_starts + region_ends)/2).astype(np.int)
mid_positions_Mb = np.round(mid_positions / 1e6, 2)
# clean up chrom copy number
chr_nums = np.array([int(_info) for _info in rep2_info_dict['Chromosome copy number']])
chr_ids, region_cts = np.unique(chr_nums, return_counts=True)
dna_zxys_list = [[[] for _start in region_starts] for _id in chr_ids]
# clean up zxy
for _z,_x,_y,_reg_info, _cid in tqdm(zip(rep2_info_dict['Z(nm)'],rep2_info_dict['X(nm)'],\
rep2_info_dict['Y(nm)'],rep2_info_dict['Genomic coordinate'],\
rep2_info_dict['Chromosome copy number'])):
# get chromosome inds
_cid = int(_cid)
_cind = np.where(chr_ids == _cid)[0][0]
# get region indices
_start = int(_reg_info.split(':')[1].split('-')[0])
_rind = np.where(region_starts==_start)[0][0]
dna_zxys_list[_cind][_rind] = np.array([float(_z),float(_x), float(_y)])
# merge together
dna_zxys_list = np.array(dna_zxys_list)
data_rep2['chrom_ids'] = chr_ids
data_rep2['region_names'] = region_names
data_rep2['mid_position_Mb'] = mid_positions_Mb
data_rep2['dna_zxys'] = dna_zxys_list
# clean up tss and transcription
if 'Gene names' in rep2_info_dict:
import re
# first extract number of genes
gene_names = []
for _gene_info, _trans_info, _tss_coord in zip(rep2_info_dict['Gene names'],
rep2_info_dict['Transcription'],
rep2_info_dict['TSS ZXY(nm)']):
if _gene_info != '':
# split by semicolon
_genes = _gene_info.split(';')[:-1]
for _gene in _genes:
if _gene not in gene_names:
gene_names.append(_gene)
print(f"{len(gene_names)} genes exist in this dataset.")
# initialize gene and transcription
tss_zxys_list = [[[] for _gene in gene_names] for _id in chr_ids]
transcription_profiles = [[[] for _gene in gene_names] for _id in chr_ids]
# loop through to get info
for _cid, _gene_info, _trans_info, _tss_locations in tqdm(zip(rep2_info_dict['Chromosome copy number'],
rep2_info_dict['Gene names'],
rep2_info_dict['Transcription'],
rep2_info_dict['TSS ZXY(nm)'])):
# get chromosome inds
_cid = int(_cid)
_cind = np.where(chr_ids == _cid)[0][0]
# process if there are genes in this region:
if _gene_info != '':
# split by semicolon
_genes = _gene_info.split(';')[:-1]
_transcribes = _trans_info.split(';')[:-1]
_tss_zxys = _tss_locations.split(';')[:-1]
for _gene, _transcribe, _tss_zxy in zip(_genes, _transcribes, _tss_zxys):
# get gene index
_gind = gene_names.index(_gene)
# get transcription profile
if _transcribe == 'on':
transcription_profiles[_cind][_gind] = True
else:
transcription_profiles[_cind][_gind] = False
# get coordinates
_tss_zxy = np.array([np.float(_c) for _c in re.split(r'\s+', _tss_zxy.split('[')[1].split(']')[0]) if _c != ''])
tss_zxys_list[_cind][_gind] = _tss_zxy
tss_zxys_list = np.array(tss_zxys_list)
transcription_profiles = np.array(transcription_profiles)
data_rep2['gene_names'] = gene_names
data_rep2['tss_zxys'] = tss_zxys_list
data_rep2['trans_pfs'] = transcription_profiles
# clean up cell_cycle states
if 'Cell cycle state' in rep2_info_dict:
cell_cycle_types = np.unique(rep2_info_dict['Cell cycle state'])
cell_cycle_flag_dict = {_k:[[] for _id in chr_ids] for _k in cell_cycle_types if _k != 'ND'}
for _cid, _state in tqdm(zip(rep2_info_dict['Chromosome copy number'],rep2_info_dict['Cell cycle state'])):
# get chromosome inds
_cid = int(_cid)
_cind = np.where(chr_ids == _cid)[0][0]
if np.array([_v[_cind]==[] for _k,_v in cell_cycle_flag_dict.items()]).any():
for _k,_v in cell_cycle_flag_dict.items():
if _k == _state:
_v[_cind] = True
else:
_v[_cind] = False
# append to data
for _k, _v in cell_cycle_flag_dict.items():
data_rep2[f'{_k}_flags'] = np.array(_v)
###Output
_____no_output_____
###Markdown
call domains for rep2
###Code
import source.domain_tools.DomainAnalysis as da
import multiprocessing as mp
num_threads=32
domain_corr_cutoff = 0.75
domain_dist_cutoff = 500 # nm
_domain_args = [(_zxys, 4, 1000, domain_corr_cutoff, domain_dist_cutoff)
for _zxys in data_rep2['dna_zxys']]
_domain_time = time.time()
print(f"Multiprocessing call domain starts", end=' ')
if 'domain_starts' not in data_rep2:
with mp.Pool(num_threads) as domain_pool:
domain_results = domain_pool.starmap(da.get_dom_starts_cor, _domain_args)
domain_pool.close()
domain_pool.join()
domain_pool.terminate()
# save
data_rep2['domain_starts'] = [np.array(_r[-1]) for _r in domain_results]
data_rep2['params']['domain_corr_cutoff'] = domain_corr_cutoff
data_rep2['params']['domain_dist_cutoff'] = domain_dist_cutoff
print(f"in {time.time()-_domain_time:.3f}s.")
from tqdm import tqdm_notebook as tqdm
def rg_mean(zxy):
"""computes radius of gyration"""
zxy_ = np.array(zxy)
zxy_ = zxy_[~np.isnan(zxy_[:,0])]
zxy_ = zxy_ - np.mean(zxy_,0)
return np.sqrt(np.mean(np.sum(zxy_**2,axis=-1)))
g1_rgs = []
g2_rgs = []
for _i, (pt_,doms_) in tqdm(enumerate(zip(data_rep2['dna_zxys'],data_rep2['domain_starts']))):
for i1,i2 in zip(doms_[1:-2],doms_[2:-1]):
if data_rep2['G1_flags'][_i]:
g1_rgs.append(rg_mean(pt_[i1:i2]))
elif data_rep2['G2/S_flags'][_i]:
g2_rgs.append(rg_mean(pt_[i1:i2]))
g1_rgs = np.array(g1_rgs)
g2_rgs = np.array(g2_rgs)
%matplotlib inline
rg_limits = [0,1500]
fig, ax = plt.subplots(figsize=(_single_col_width, _single_col_width),dpi=600)
ax.hist(g1_rgs, 50, range=(min(rg_limits),max(rg_limits)),
density=True, alpha=0.5,
color=[0.2,0.5,0.5], label=f'G1, median={np.nanmedian(g1_rgs):.0f}nm')
ax.hist(g2_rgs, 50, range=(min(rg_limits),max(rg_limits)),
density=True, alpha=0.5,
color=[1,0.2,0.2], label=f'G2/S, median={np.nanmedian(g2_rgs):.0f}nm')
ax.legend(fontsize=_font_size-1, loc='upper right')
ax.set_xlabel("Radius of gyration (nm)", fontsize=_font_size, labelpad=1)
ax.set_ylabel("Probability density", fontsize=_font_size, labelpad=1)
ax.tick_params('both', labelsize=_font_size,
width=_ticklabel_width, length=_ticklabel_size,
pad=1, labelleft=True) # remove bottom ticklabels for a_ax
[i[1].set_linewidth(_ticklabel_width) for i in ax.spines.items()]
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xlim(rg_limits)
plt.gcf().subplots_adjust(bottom=0.15, left=0.15)
plt.savefig(os.path.join(figure_folder, f'LFig5A_chr21-repeat_radius_of_gyration_G1-G2.pdf'), transparent=True)
plt.show()
from tqdm import tqdm_notebook as tqdm
resolution = 0.05 # Mb
g1_gsizes = []
g2_gsizes = []
for _i, (pt_,doms_) in tqdm(enumerate(zip(data_rep2['dna_zxys'],data_rep2['domain_starts']))):  # use the 'dna_zxys' key created above
for i1,i2 in zip(doms_[1:-2],doms_[2:-1]):
if data_rep2['G1_flags'][_i]:
g1_gsizes.append((i2-i1)*resolution)
        elif data_rep2['G2/S_flags'][_i]:  # flag key follows the 'G2/S' cell-cycle state label
g2_gsizes.append((i2-i1)*resolution)
g1_gsizes = np.array(g1_gsizes)
g2_gsizes = np.array(g2_gsizes)
%matplotlib inline
rg_limits = [0,4]
fig, ax = plt.subplots(figsize=(_single_col_width, _single_col_width),dpi=600)
ax.hist(g1_gsizes, 40, range=(min(rg_limits),max(rg_limits)),
density=True, alpha=0.5,
color=[0.2,0.5,0.5], label=f'G1, median={np.nanmedian(g1_gsizes):.2f}Mb')
ax.hist(g2_gsizes, 40, range=(min(rg_limits),max(rg_limits)),
density=True, alpha=0.5,
color=[1,0.2,0.2], label=f'G2/S, median={np.nanmedian(g2_gsizes):.2f}Mb')
ax.legend(fontsize=_font_size-1, loc='upper right')
ax.set_xlabel("Genomic size (Mb)", fontsize=_font_size, labelpad=1)
ax.set_ylabel("Probability density", fontsize=_font_size, labelpad=1)
ax.tick_params('both', labelsize=_font_size,
width=_ticklabel_width, length=_ticklabel_size,
pad=1, labelleft=True) # remove bottom ticklabels for a_ax
[i[1].set_linewidth(_ticklabel_width) for i in ax.spines.items()]
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xlim(rg_limits)
plt.gcf().subplots_adjust(bottom=0.15, left=0.15)
plt.savefig(os.path.join(figure_folder, f'LFig5B_chr21-repeat_domain_size_G1-G2.pdf'), transparent=True)
plt.show()
###Output
_____no_output_____
|
udacity_machine_learning_notes/intro_data_science/lesson_01/lesson_01.ipynb
|
###Markdown
Course Description [here](https://www.udacity.com/course/intro-to-data-science--ud359) Lesson 01 - Intro to Data Science Simpson's Paradox: [Data](http://www.calvin.edu/~stob/data/Berkeley.csv), [Analysis](http://vudlab.com/simpsons/), [Explanation](https://youtu.be/fDcjhAKuhqQ?t=1519). Video 10 - Continue from
###Code
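# A small sketch for exploring the Simpson's paradox example linked above.
# The exact column names in Berkeley.csv are not shown in this notebook, so inspect the
# file first and adapt the groupby columns; the classic Berkeley admissions data has
# admit/gender/department style fields.
import pandas as pd

berkeley = pd.read_csv('http://www.calvin.edu/~stob/data/Berkeley.csv')
print(berkeley.head())
# Compare the overall admission rate per gender (aggregated over departments) with the
# admission rate per gender within each department; the aggregate trend can reverse
# once department is conditioned on, which is the paradox discussed in the linked video.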
###Output
_____no_output_____
|
Trending Technologies/2019-2021/Text data processing 2019 - 2021.ipynb
|
###Markdown
We get the 'clean' column, which is ready for analysis
###Code
df['clean'].head()
###Output
_____no_output_____
###Markdown
Look at the code below to understand the `clean` function step by step
###Code
line = "hello, this 1 sentence is just for understanding the function. "
tok = word_tokenize(line.strip())
tok
cle = [i for i in tok if i not in stopwords.words('english')]
cle
punctuations = list(string.punctuation)
cle = [i.strip(''.join(punctuations)) for i in cle if i not in punctuations]
cle
' '.join(cle)
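
# A minimal sketch of a clean() function that ties together the steps demonstrated above
# (tokenize, drop stopwords, strip punctuation, re-join). The actual clean() defined earlier
# in this notebook may differ in details such as lowercasing or regex handling.
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import string
def clean_sketch(text):
    tokens = word_tokenize(text.strip())
    tokens = [t for t in tokens if t not in stopwords.words('english')]
    punctuations = list(string.punctuation)
    tokens = [t.strip(''.join(punctuations)) for t in tokens if t not in punctuations]
    return ' '.join(tokens)
clean_sketch(line)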
###Output
_____no_output_____
|
Week 1/LinAlg_58051_Imbat_Python_Programming.ipynb
|
###Markdown
Lab 1: Python Programming
###Code
name = input("Student Name: ")
course = input("Student Course: ")
prelim_grade = float(input("Prelim Grade:"))
midterm_grade = float(input("Midterm Grade:"))
finals_grade = float(input("Finals Grade:"))
average = float(0)
average = (prelim_grade*0.3 + midterm_grade*0.3 + finals_grade*0.4)
happy = "\U0001F600"
laughing = "\U0001F606"
sad = "\U0001F62D"
if (average > 70.00):
print("Semestral Grade: {:.2f}".format(average),happy)
if (average == 70.00):
print("Semestral Grade: {:.2f}".format(average),laughing)
if (average < 70.00):
print("Semestral Grade: {:.2f}".format(average),sad)
###Output
_____no_output_____
|
notebooks/PythonMysql.ipynb
|
###Markdown
Programming and Database Fundamentals for Data Scientists - EAS503 The goal of PyMySQL is to be a drop-in replacement for MySQLdb and work on CPython, PyPy and IronPython. Installation: run `pip install PyMySQL` or `conda install PyMySQL` (might need `sudo` privileges depending on your Python installation). Start with a simple table. Run the following in your database:
```sql
CREATE TABLE `users` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `email` varchar(255) COLLATE utf8_bin NOT NULL,
    `password` varchar(255) COLLATE utf8_bin NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin
AUTO_INCREMENT=1 ;
```
###Code
import pymysql.cursors
# Connect to the database
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='eas503db',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
try:
with connection.cursor() as cursor:
# Create a new record
sql = "INSERT INTO `users` (`email`, `password`) VALUES (%s, %s)"
cursor.execute(sql, ('[email protected]', 'very-very-secret'))
#sql = "INSERT INTO `users` (`email`, `password`) VALUES ('[email protected]', 'very-very-secret')"
#cursor.execute(sql)
# connection is not autocommit by default. So you must commit to save
# your changes.
connection.commit()
finally:
connection.close()
# Connect to the database
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='eas503db',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
try:
with connection.cursor() as cursor:
# Create a new record
sql = "DELETE FROM `users`"
cursor.execute(sql)
# connection is not autocommit by default. So you must commit to save
# your changes.
connection.commit()
finally:
connection.close()
# Connect to the database
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='eas503db',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
try:
with connection.cursor() as cursor:
# Read a single record
sql = "SELECT `id`, `password`, `email` FROM `users`"
cursor.execute(sql)
#result = cursor.fetchone()
result = cursor.fetchall()
print(result)
finally:
connection.close()
###Output
[{'id': 1, 'password': 'very-very-secret', 'email': '[email protected]'}, {'id': 2, 'password': 'very-very-secret', 'email': '[email protected]'}]
###Markdown
Using Pandas library
###Code
import pandas as pd
# Connect to the database
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='eas503db',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
df = pd.read_sql('SELECT * FROM users', con=connection)
df
###Output
_____no_output_____
###Markdown
Querying HR database
###Code
import time
connection = pymysql.connect(host='localhost',
user='root',
password='root',
db='employees',
charset='utf8mb4',
cursorclass=pymysql.cursors.DictCursor)
st = time.time()
df = pd.read_sql('''SELECT E.first_name,E.last_name,Y.salary
FROM employees E JOIN (
SELECT S.emp_no,S.salary
FROM salaries S JOIN (
SELECT emp_no,MAX(from_date) AS from_date
FROM salaries
GROUP BY emp_no) AS X
ON S.emp_no = X.emp_no AND S.from_date = X.from_date
) Y
ON E.emp_no = Y.emp_no
WHERE E.hire_date >= \'2000-01-01\' ''', con=connection)
en = time.time()
print(en-st)
df
connection.close()
###Output
_____no_output_____
###Markdown
Alternative Pull the employees and salaries tables into Pandas and then use `pd.merge()`/`DataFrame.join()` plus additional operations to reproduce the above dataframe, as sketched at the end of the next code cell
###Code
# re-open the connection here, since it was closed at the end of the previous cell
connection = pymysql.connect(host='localhost',
                             user='root',
                             password='root',
                             db='employees',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
st = time.time()
df = pd.read_sql('''SELECT E.first_name,E.last_name,E.hire_date,Y.salary
FROM employees E JOIN (
SELECT S.emp_no,S.salary
FROM salaries S JOIN (
SELECT emp_no,MAX(from_date) AS from_date
FROM salaries
GROUP BY emp_no) AS X
ON S.emp_no = X.emp_no AND S.from_date = X.from_date
) Y
ON E.emp_no = Y.emp_no ''', con=connection)
en = time.time()
print(en-st)
df.head()
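
# The markdown above suggests pulling the two tables into pandas and joining there instead
# of in SQL. A minimal sketch (column names assumed to follow the standard MySQL 'employees'
# sample schema):
emp_df = pd.read_sql('SELECT emp_no, first_name, last_name, hire_date FROM employees', con=connection)
sal_df = pd.read_sql('SELECT emp_no, salary, from_date FROM salaries', con=connection)
# keep each employee's most recent salary record, then merge it onto the employee table
latest_sal = sal_df.loc[sal_df.groupby('emp_no')['from_date'].idxmax(), ['emp_no', 'salary']]
alt_df = emp_df.merge(latest_sal, on='emp_no')
alt_df.head()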
###Output
_____no_output_____
|
Extracting_transforming_selecting_features/featureExtraction.ipynb
|
###Markdown
Feature Extractors Extraction: extracting features from "raw" data
+ TF-IDF
+ Word2Vec
+ CountVectorizer
+ FeatureHasher

TF-IDF
###Code
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("Feature Extractor").getOrCreate()
from pyspark.ml.feature import HashingTF, IDF, Tokenizer
sentenceData = spark.createDataFrame([
(0.0, "Hi I heard about Spark"),
(0.0, "I wish Java could use case classes"),
(1.0, "Logistic regression models are neat")
], ["label", "sentence"])
sentenceData.show()
tokenizer = Tokenizer(inputCol='sentence', outputCol='words')
wordsData = tokenizer.transform(sentenceData)
hashingTF = HashingTF(inputCol='words', outputCol='rawFeatures',numFeatures=20)
featurizedData = hashingTF.transform(wordsData)
idf = IDF(inputCol='rawFeatures', outputCol='features')
idfModel = idf.fit(featurizedData)
rescaledData = idfModel.transform(featurizedData)
rescaledData.select("label",'features').show(truncate=False)
###Output
+-----+-------------------------------------------------------------------------------------------------------------------------------------------+
|label|features |
+-----+-------------------------------------------------------------------------------------------------------------------------------------------+
|0.0 |(20,[6,8,13,16],[0.28768207245178085,0.6931471805599453,0.28768207245178085,0.5753641449035617]) |
|0.0 |(20,[0,2,7,13,15,16],[0.6931471805599453,0.6931471805599453,1.3862943611198906,0.28768207245178085,0.6931471805599453,0.28768207245178085])|
|1.0 |(20,[3,4,6,11,19],[0.6931471805599453,0.6931471805599453,0.28768207245178085,0.6931471805599453,0.6931471805599453]) |
+-----+-------------------------------------------------------------------------------------------------------------------------------------------+
###Markdown
Word2Vec Word2Vec is an Estimator which takes sequences of words representing documents and trains a Word2VecModel. The model maps each word to a unique fixed-size vector. The Word2VecModel transforms each document into a vector using the average of all words in the document; this vector can then be used as features for prediction, document similarity calculations, etc.
###Code
from pyspark.ml.feature import Word2Vec
# Input data: Each row is a bag of words from a sentence or document.
documentDF = spark.createDataFrame([
("Hi I heard about Spark".split(" "), ),
("I wish Java could use case classes".split(" "), ),
("Logistic regression models are neat".split(" "), )
], ["text"])
documentDF.show(truncate=False)
word2vec = Word2Vec(vectorSize=3, minCount=0, inputCol='text', outputCol='result')
model = word2vec.fit(documentDF).transform(documentDF)
model.show(truncate=False)
for row in model.collect():
text, vector = row
print(f"Text:- {' '.join(text)} => vector:- {vector}")
###Output
Text:- Hi I heard about Spark => vector:- [0.10757351368665696,0.005313180014491082,0.02163493409752846]
Text:- I wish Java could use case classes => vector:- [0.02210963943174907,-0.03750888577529362,0.046501401013561657]
Text:- Logistic regression models are neat => vector:- [-0.023545664548873902,-0.036877965182065965,0.0036725979298353195]
###Markdown
CountVectorizer CountVectorizer and CountVectorizerModel aim to help convert a collection of text documents to vectors of token counts. When an a-priori dictionary is not available, CountVectorizer can be used as an Estimator to extract the vocabulary, and generates a CountVectorizerModel. The model produces sparse representations for the documents over the vocabulary, which can then be passed to other algorithms like LDA. During the fitting process, CountVectorizer will select the top vocabSize words ordered by term frequency across the corpus. An optional parameter minDF also affects the fitting process by specifying the minimum number (or fraction if < 1.0) of documents a term must appear in to be included in the vocabulary. Another optional binary toggle parameter controls the output vector. If set to true all nonzero counts are set to 1. This is especially useful for discrete probabilistic models that model binary, rather than integer, counts.
###Code
from pyspark.ml.feature import CountVectorizer
df = spark.createDataFrame([
(0,"a b c".split(" ")),
(1, "a b b c a".split(" "))
], ['id','words'])
df.show()
# fit a countvectorizer from the corpus
cv = CountVectorizer(inputCol='words', outputCol='features',
vocabSize=3, minDF=2.0)
model = cv.fit(df)
result = model.transform(df)
result.show(truncate=False)
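
# The description above also mentions an optional `binary` toggle: with binary=True all
# non-zero counts are clipped to 1.0, which suits binary rather than count-based models.
cv_bin = CountVectorizer(inputCol='words', outputCol='features',
                         vocabSize=3, minDF=2.0, binary=True)
cv_bin.fit(df).transform(df).show(truncate=False)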
###Output
+---+---------------+-------------------------+
|id |words |features |
+---+---------------+-------------------------+
|0 |[a, b, c] |(3,[0,1,2],[1.0,1.0,1.0])|
|1 |[a, b, b, c, a]|(3,[0,1,2],[2.0,2.0,1.0])|
+---+---------------+-------------------------+
###Markdown
FeatureHasher Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space). This is done using the hashing trick to map features to indices in the feature vector. The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:
- Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns using the categoricalCols parameter.
- String columns: For categorical features, the hash value of the string “column_name=value” is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are “one-hot” encoded (similarly to using OneHotEncoder with dropLast=false).
- Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as “column_name=true” or “column_name=false”, with an indicator value of 1.0.
###Code
from pyspark.ml.feature import FeatureHasher
dataset = spark.createDataFrame([
(2.2, True, "1", "foo"),
(3.3, False, "2", "bar"),
(4.4, False, "3", "baz"),
(5.5, False, "4", "foo")
], ["real", "bool", "stringNum", "string"])
dataset.show()
hasher = FeatureHasher(inputCols=['real','bool','stringNum','string'],outputCol='features')
featurized = hasher.transform(dataset)
featurized.show(truncate=False)
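
# The notes above mention that numeric columns are hashed as numeric values unless they are
# listed in `categoricalCols`; here the numeric 'real' column is treated as categorical instead.
hasher_cat = FeatureHasher(inputCols=['real', 'bool', 'stringNum', 'string'],
                           categoricalCols=['real'], outputCol='features')
hasher_cat.transform(dataset).show(truncate=False)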
spark.stop()
###Output
_____no_output_____
|
tutorial/source/03-svi_part_i-v2.ipynb
|
###Markdown
*Via variational inference*
- A generative model: the model takes three kinds of inputs: observations, latent random variables, and parameters (the guide has no observations).
- To train a model, the user needs to provide three inputs: the model, the guide, and an optimizer.

SVI Part I: An Introduction to Stochastic Variational Inference in Pyro

Pyro has been designed with particular attention paid to supporting stochastic variational inference as a general purpose inference algorithm. Let's see how we go about doing variational inference in Pyro.

Setup

We're going to assume we've already defined our model in Pyro (for more details on how this is done see [Intro Part I](intro_part_i.ipynb)). As a quick reminder, the model is given as a stochastic function `model(*args, **kwargs)`, which, in the general case, takes arguments. The different pieces of `model()` are encoded via the mapping:
1. observations $\Longleftrightarrow$ `pyro.sample` with the `obs` argument
2. latent random variables $\Longleftrightarrow$ `pyro.sample`
3. parameters $\Longleftrightarrow$ `pyro.param`

Now let's establish some notation. The model has observations ${\bf x}$ and latent random variables ${\bf z}$ as well as parameters $\theta$. It has a joint probability density of the form

$$p_{\theta}({\bf x}, {\bf z}) = p_{\theta}({\bf x}|{\bf z}) p_{\theta}({\bf z})$$

We assume that the various probability distributions $p_i$ that make up $p_{\theta}({\bf x}, {\bf z})$ have the following properties:
1. we can sample from each $p_i$
2. we can compute the pointwise log pdf $p_i$
3. $p_i$ is differentiable w.r.t. the parameters $\theta$

(Note: in other words, a model is a stochastic function `model(*args, **kwargs)` built from the three mappings above; mathematically it has observed data ${\bf x}$, latent variables ${\bf z}$ and parameters $\theta$ tied together by the joint density above, whose factors satisfy the three properties above.)

----

Model Learning

In this context our criterion for learning a good model will be maximizing the log evidence, i.e. we want to find the value of $\theta$ given by

$$\theta_{\rm{max}} = \underset{\theta}{\operatorname{argmax}} \log p_{\theta}({\bf x})$$

where the log evidence $\log p_{\theta}({\bf x})$ is given by

$$\log p_{\theta}({\bf x}) = \log \int\! d{\bf z}\; p_{\theta}({\bf x}, {\bf z})$$

In the general case this is a doubly difficult problem. This is because (even for a fixed $\theta$) the integral over the latent random variables $\bf z$ is often intractable. Furthermore, even if we know how to calculate the log evidence for all values of $\theta$, maximizing the log evidence as a function of $\theta$ will in general be a difficult non-convex optimization problem. In addition to finding $\theta_{\rm{max}}$, we would like to calculate the posterior over the latent variables $\bf z$:

$$ p_{\theta_{\rm{max}}}({\bf z} | {\bf x}) = \frac{p_{\theta_{\rm{max}}}({\bf x} , {\bf z})}{\int \! d{\bf z}\; p_{\theta_{\rm{max}}}({\bf x} , {\bf z}) } $$

Note that the denominator of this expression is the (usually intractable) evidence. Variational inference offers a scheme for finding $\theta_{\rm{max}}$ and computing an approximation to the posterior $p_{\theta_{\rm{max}}}({\bf z} | {\bf x})$. Let's see how that works.
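To make the three mappings above concrete before moving on, here is a minimal illustrative sketch of a Pyro model (a simple beta-Bernoulli example chosen only for illustration; it is not part of the original tutorial text): latent variables and observations are both declared with `pyro.sample`, and the observed sites carry the `obs` argument.

```python
import torch
import pyro
import pyro.distributions as dist

def model(data):
    # latent random variable: a pyro.sample site without the obs argument
    f = pyro.sample("latent_fairness", dist.Beta(10.0, 10.0))
    # observations: pyro.sample sites with the obs argument
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Bernoulli(f), obs=data)
```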
我们使用极大思然估计去求的 $\theta_{max}$ 的思路会遇到很多麻烦。变分推断的目的是一方面估计出来联合分布的参数(也就是模型参数,得到生成模型),另外一个方面是得到后验。 GuideThe basic idea is that we introduce a parameterized distribution $q_{\phi}({\bf z})$, where $\phi$ are known as the variational parameters. This distribution is called the variational distribution in much of the literature, and in the context of Pyro it's called the **guide** (one syllable instead of nine!). The guide will serve as an approximation to the posterior. 基本的想法是用一个带参分布 $q_\phi(z)$ 来近似后验分布 $p_\theta(z|x)$, $q$ 被称作 variational distribution, 而在 Pyro 中我们叫做 guide 用于近似后验分布。 Just like the model, the guide is encoded as a stochastic function `guide()` that contains `pyro.sample` and `pyro.param` statements. It does _not_ contain observed data, since the guide needs to be a properly normalized distribution. Note that Pyro enforces that `model()` and `guide()` have the same call signature, i.e. both callables should take the same arguments. Since the guide is an approximation to the posterior $p_{\theta_{\rm{max}}}({\bf z} | {\bf x})$, the guide needs to provide a valid joint probability density over all the latent random variables in the model. Recall that when random variables are specified in Pyro with the primitive statement `pyro.sample()` the first argument denotes the name of the random variable. These names will be used to align the random variables in the model and guide. To be very explicit, if the model contains a random variable `z_1````pythondef model(): pyro.sample("z_1", ...)```then the guide needs to have a matching `sample` statement```pythondef guide(): pyro.sample("z_1", ...)```The distributions used in the two cases can be different, but the names must line-up 1-to-1. Once we've specified a guide (we give some explicit examples below), we're ready to proceed to inference.Learning will be setup as an optimization problem where each iteration of training takes a step in $\theta-\phi$ space that moves the guide closer to the exact posterior.To do this we need to define an appropriate objective function. 由于 model 与 guide 的对应关心,所以其 names will be used to align the random variables in the model and guide. 下一步我们定义一个合适的目标函数。 ELBOA simple derivation (for example see reference [1]) yields what we're after: the evidence lower bound (ELBO). The ELBO, which is a function of both $\theta$ and $\phi$, is defined as an expectation w.r.t. to samples from the guide:$${\rm ELBO} \equiv \mathbb{E}_{q_{\phi}({\bf z})} \left [ \log p_{\theta}({\bf x}, {\bf z}) - \log q_{\phi}({\bf z})\right]$$ The evidence lower bound 是常见的目标函数。 $ \log p_{\theta}({\bf x}) = {\rm ELBO} + \rm{KL}\!\left( q_{\phi}({\bf z}) \lVert p_{\theta}({\bf z} | {\bf x}) \right) \geq ELBO $ 所以被叫做证据下届估计。 By assumption we can compute the log probabilities inside the expectation. And since the guide is assumed to be a parametric distribution we can sample from, we can compute Monte Carlo estimates of this quantity. Crucially, the ELBO is a lower bound to the log evidence, i.e. for all choices of $\theta$ and $\phi$ we have that $$\log p_{\theta}({\bf x}) \ge {\rm ELBO} $$So if we take (stochastic) gradient steps to maximize the ELBO, we will also be pushing the log evidence higher (in expectation). 
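To make the Monte Carlo point concrete, here is a minimal sketch using plain `torch.distributions` (not Pyro's machinery) for a toy model $p(z)=\mathcal{N}(0,1)$, $p(x|z)=\mathcal{N}(z,1)$ with guide $q(z)=\mathcal{N}(\mu_q,\sigma_q)$; all names and values are illustrative:

```python
import torch
from torch.distributions import Normal

x_obs = torch.tensor(0.7)
mu_q = torch.tensor(0.3, requires_grad=True)
log_sigma_q = torch.tensor(0.0, requires_grad=True)

def elbo_estimate(num_particles=100):
    sigma_q = log_sigma_q.exp()
    z = Normal(mu_q, sigma_q).rsample((num_particles,))                    # samples from the guide q(z)
    log_p = Normal(0.0, 1.0).log_prob(z) + Normal(z, 1.0).log_prob(x_obs)  # log p(x, z) = log p(z) + log p(x|z)
    log_q = Normal(mu_q, sigma_q).log_prob(z)                              # log q(z)
    return (log_p - log_q).mean()                                          # Monte Carlo estimate of the ELBO

loss = -elbo_estimate()   # maximising the ELBO == minimising -ELBO
loss.backward()           # gradients w.r.t. the variational parameters
print(loss.item(), mu_q.grad, log_sigma_q.grad)
```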
Furthermore, it can be shown that the gap between the ELBO and the log evidence is given by the KL divergence between the guide and the posterior:$$ \log p_{\theta}({\bf x}) - {\rm ELBO} = \rm{KL}\!\left( q_{\phi}({\bf z}) \lVert p_{\theta}({\bf z} | {\bf x}) \right) $$This KL divergence is a particular (non-negative) measure of 'closeness' between two distributions. So, for a fixed $\theta$, as we take steps in $\phi$ space that increase the ELBO, we decrease the KL divergence between the guide and the posterior, i.e. we move the guide towards the posterior. In the general case we take gradient steps in both $\theta$ and $\phi$ space simultaneously so that the guide and model play chase, with the guide tracking a moving posterior $\log p_{\theta}({\bf z} | {\bf x})$. Perhaps somewhat surprisingly, despite the moving target, this optimization problem can be solved (to a suitable level of approximation) for many different problems.So at high level variational inference is easy: all we need to do is define a guide and compute gradients of the ELBO. Actually, computing gradients for general model and guide pairs leads to some complications (see the tutorial [SVI Part III](svi_part_iii.ipynb) for a discussion). For the purposes of this tutorial, let's consider that a solved problem and look at the support that Pyro provides for doing variational inference. KL散度的定义:$$KL(p(x)|q(x)) = E_{p(x)}\log \frac{p(x)}{q(x)}$$ `SVI` ClassIn Pyro the machinery for doing variational inference is encapsulated in the `SVI` class. (At present `SVI` only provides support for the ELBO objective, but in the future Pyro will provide support for alternative variational objectives.) 在 Pyro 中变分推断被封装在 `SVI` 的类中,当前只支持 ELBO 目标函数。 The user needs to provide three things: the model, the guide, and an optimizer. We've discussed the model and guide above and we'll discuss the optimizer in some detail below, so let's assume we have all three ingredients at hand. To construct an instance of `SVI` that will do optimization via the ELBO objective, the user writes```pythonimport pyrofrom pyro.infer import SVI, Trace_ELBOsvi = SVI(model, guide, optimizer, loss=Trace_ELBO())```The `SVI` object provides two methods, `step()` and `evaluate_loss()`, that encapsulate the logic for variational learning and evaluation:1. The method `step()` takes a single gradient step and returns an estimate of the loss (i.e. minus the ELBO). If provided, the arguments to `step()` are piped to `model()` and `guide()`. 2. The method `evaluate_loss()` returns an estimate of the loss _without_ taking a gradient step. Just like for `step()`, if provided, arguments to `evaluate_loss()` are piped to `model()` and `guide()`.For the case where the loss is the ELBO, both methods also accept an optional argument `num_particles`, which denotes the number of samples used to compute the loss (in the case of `evaluate_loss`) and the loss and gradient (in the case of `step`). 用户需要指定三个输入:the model, the guide, and an optimizer. OptimizersIn Pyro, the model and guide are allowed to be arbitrary stochastic functions provided that1. `guide` doesn't contain `pyro.sample` statements with the `obs` argument2. `model` and `guide` have the same call signatureThis presents some challenges because it means that different executions of `model()` and `guide()` may have quite different behavior, with e.g. certain latent random variables and parameters only appearing some of the time. Indeed parameters may be created dynamically during the course of inference. 
In other words the space we're doing optimization over, which is parameterized by $\theta$ and $\phi$, can grow and change dynamically.In order to support this behavior, Pyro needs to dynamically generate an optimizer for each parameter the first time it appears during learning. Luckily, PyTorch has a lightweight optimization library (see [torch.optim](http://pytorch.org/docs/master/optim.html)) that can easily be repurposed for the dynamic case. All of this is controlled by the `optim.PyroOptim` class, which is basically a thin wrapper around PyTorch optimizers. `PyroOptim` takes two arguments: a constructor for PyTorch optimizers `optim_constructor` and a specification of the optimizer arguments `optim_args`. At high level, in the course of optimization, whenever a new parameter is seen `optim_constructor` is used to instantiate a new optimizer of the given type with arguments given by `optim_args`. Most users will probably not interact with `PyroOptim` directly and will instead interact with the aliases defined in `optim/__init__.py`. Let's see how that goes. There are two ways to specify the optimizer arguments. In the simpler case, `optim_args` is a _fixed_ dictionary that specifies the arguments used to instantiate PyTorch optimizers for _all_ the parameters:```pythonfrom pyro.optim import Adamadam_params = {"lr": 0.005, "betas": (0.95, 0.999)}optimizer = Adam(adam_params)```The second way to specify the arguments allows for a finer level of control. Here the user must specify a callable that will be invoked by Pyro upon creation of an optimizer for a newly seen parameter. This callable must have the following signature:1. `module_name`: the Pyro name of the module containing the parameter, if any2. `param_name`: the Pyro name of the parameterThis gives the user the ability to, for example, customize learning rates for different parameters. For an example where this sort of level of control is useful, see the [discussion of baselines](svi_part_iii.ipynb). Here's a simple example to illustrate the API:```pythonfrom pyro.optim import Adamdef per_param_callable(module_name, param_name): if param_name == 'my_special_parameter': return {"lr": 0.010} else: return {"lr": 0.001}optimizer = Adam(per_param_callable)```This simply tells Pyro to use a learning rate of `0.010` for the Pyro parameter `my_special_parameter` and a learning rate of `0.001` for all other parameters. 用 SVI 求 Beta 分布的后验分布We finish with a simple example. You've been given a two-sided coin. You want to determine whether the coin is fair or not, i.e. whether it falls heads or tails with the same frequency. You have a prior belief about the likely fairness of the coin based on two observations:- it's a standard quarter issued by the US Mint- it's a bit banged up from years of use 我们以一个简单关于掷硬币的例子结束本章。 So while you expect the coin to have been quite fair when it was first produced, you allow for its fairness to have since deviated from a perfect 1:1 ratio. So you wouldn't be surprised if it turned out that the coin preferred heads over tails at a ratio of 11:10. By contrast you would be very surprised if it turned out that the coin preferred heads over tails at a ratio of 5:1—it's not _that_ banged up.To turn this into a probabilistic model we encode heads and tails as `1`s and `0`s. We encode the fairness of the coin as a real number $f$, where $f$ satisfies $f \in [0.0, 1.0]$ and $f=0.50$ corresponds to a perfectly fair coin. 
Our prior belief about $f$ will be encoded by a beta distribution, specifically $\rm{Beta}(10,10)$, which is a symmetric probability distribution on the interval $[0.0, 1.0]$ that is peaked at $f=0.5$.
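For a quick numerical feel for this prior (a small added sketch, using `torch.distributions` directly), note that $\rm{Beta}(10,10)$ has mean $0.5$ and a standard deviation of roughly $0.11$:

```python
import torch
from torch.distributions import Beta

prior = Beta(torch.tensor(10.0), torch.tensor(10.0))
print(prior.mean, prior.stddev)   # tensor(0.5000), tensor(0.1091)
```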
###Code
<center><figure><img src="_static/img/beta.png" style="width: 300px;"><figcaption> <font size="-1"><b>Figure 1</b>: The distribution Beta that encodes our prior belief about the fairness of the coin. </font></figcaption></figure></center>
###Output
_____no_output_____
###Markdown
To learn something about the fairness of the coin that is more precise than our somewhat vague prior, we need to do an experiment and collect some data. Let's say we flip the coin 10 times and record the result of each flip. In practice we'd probably want to do more than 10 trials, but hey this is a tutorial.Assuming we've collected the data in a list `data`, the corresponding model is given by```pythonimport pyro.distributions as distdef model(data): define the hyperparameters that control the beta prior alpha0 = torch.tensor(10.0) beta0 = torch.tensor(10.0) sample f from the beta prior f = pyro.sample("latent_fairness", dist.Beta(alpha0, beta0)) loop over the observed data for i in range(len(data)): observe datapoint i using the bernoulli likelihood Bernoulli(f) pyro.sample("obs_{}".format(i), dist.Bernoulli(f), obs=data[i])```Here we have a single latent random variable (`'latent_fairness'`), which is distributed according to $\rm{Beta}(10, 10)$. Conditioned on that random variable, we observe each of the datapoints using a bernoulli likelihood. Note that each observation is assigned a unique name in Pyro.Our next task is to define a corresponding guide, i.e. an appropriate variational distribution for the latent random variable $f$. The only real requirement here is that $q(f)$ should be a probability distribution over the range $[0.0, 1.0]$, since $f$ doesn't make sense outside of that range. A simple choice is to use another beta distribution parameterized by two trainable parameters $\alpha_q$ and $\beta_q$. Actually, in this particular case this is the 'right' choice, since conjugacy of the bernoulli and beta distributions means that the exact posterior is a beta distribution. In Pyro we write:```pythondef guide(data): register the two variational parameters with Pyro. alpha_q = pyro.param("alpha_q", torch.tensor(15.0), constraint=constraints.positive) beta_q = pyro.param("beta_q", torch.tensor(15.0), constraint=constraints.positive) sample latent_fairness from the distribution Beta(alpha_q, beta_q) pyro.sample("latent_fairness", dist.Beta(alpha_q, beta_q))```There are a few things to note here:- We've taken care that the names of the random variables line up exactly between the model and guide.- `model(data)` and `guide(data)` take the same arguments.- The variational parameters are `torch.tensor`s. The `requires_grad` flag is automatically set to `True` by `pyro.param`.- We use `constraint=constraints.positive` to ensure that `alpha_q` and `beta_q` remain non-negative during optimization.Now we can proceed to do stochastic variational inference. ```python set up the optimizeradam_params = {"lr": 0.0005, "betas": (0.90, 0.999)}optimizer = Adam(adam_params) setup the inference algorithmsvi = SVI(model, guide, optimizer, loss=Trace_ELBO())n_steps = 5000 do gradient stepsfor step in range(n_steps): svi.step(data)``` Note that in the `step()` method we pass in the data, which then get passed to the model and guide. The only thing we're missing at this point is some data. So let's create some data and assemble all the code snippets above into a complete script:
###Code
import math
import os
import torch
import torch.distributions.constraints as constraints
import pyro
from pyro.optim import Adam
from pyro.infer import SVI, Trace_ELBO
import pyro.distributions as dist
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
n_steps = 2 if smoke_test else 2000
# enable validation (e.g. validate parameters of distributions)
assert pyro.__version__.startswith('0.4.1')
pyro.enable_validation(True)
# clear the param store in case we're in a REPL
pyro.clear_param_store()
# create some data with 6 observed heads and 4 observed tails
data = []
for _ in range(6):
data.append(torch.tensor(1.0))
for _ in range(4):
data.append(torch.tensor(0.0))
def model(data):
# define the hyperparameters that control the beta prior
alpha0 = torch.tensor(10.0)
beta0 = torch.tensor(10.0)
# sample f from the beta prior
f = pyro.sample("latent_fairness", dist.Beta(alpha0, beta0))
# loop over the observed data
for i in range(len(data)):
# observe datapoint i using the bernoulli likelihood
pyro.sample("obs_{}".format(i), dist.Bernoulli(f), obs=data[i])
def guide(data):
# register the two variational parameters with Pyro
# - both parameters will have initial value 15.0.
# - because we invoke constraints.positive, the optimizer
# will take gradients on the unconstrained parameters
# (which are related to the constrained parameters by a log)
alpha_q = pyro.param("alpha_q", torch.tensor(15.0),
constraint=constraints.positive)
beta_q = pyro.param("beta_q", torch.tensor(15.0),
constraint=constraints.positive)
# sample latent_fairness from the distribution Beta(alpha_q, beta_q)
pyro.sample("latent_fairness", dist.Beta(alpha_q, beta_q))
# setup the optimizer
adam_params = {"lr": 0.0005, "betas": (0.90, 0.999)}
optimizer = Adam(adam_params)
# setup the inference algorithm
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
# do gradient steps
for step in range(n_steps):
svi.step(data)
if step % 100 == 0:
print('.', end='')
# grab the learned variational parameters
alpha_q = pyro.param("alpha_q").item()
beta_q = pyro.param("beta_q").item()
# here we use some facts about the beta distribution
# compute the inferred mean of the coin's fairness
inferred_mean = alpha_q / (alpha_q + beta_q)
# compute inferred standard deviation
factor = beta_q / (alpha_q * (1.0 + alpha_q + beta_q))
inferred_std = inferred_mean * math.sqrt(factor)
print("\nbased on the data and our prior belief, the fairness " +
"of the coin is %.3f +- %.3f" % (inferred_mean, inferred_std))
###Output
....................
based on the data and our prior belief, the fairness of the coin is 0.531 +- 0.089
|
lqr_inverted_pendulum.ipynb
|
###Markdown
Inverted Pendulum on a Cart Model and Its Optimal Control Using Linear-Quadratic Regulator (LQR) *Maciej Manna* This project showcases a physical model of an inverted pendulum placed on a cart that is free to move horizontally in one dimension. This model is essentially non-linear. We shall also consider issues associated with controlling the system using an input force that can be exerted to move the cart horizontally in either direction.The following problems will be addressed here:- the physical model of an inverted pendulum on a cart, its simulation and visualisation of the results;- linearisation of the model and the problem of its controllability;- examples of admissible controls and their influence on the behaviour of the system;- finding the optimal control for the system using the linear-quadratic regulator (LQR).This project was inspired by and based on examples given in Brunton and Kutz ([1], pp. 300-305). More on the derivation of the LQR method may be found e.g. in Evans' lectures ([2], pp. 81-83).
###Code
%matplotlib inline
import math
import numpy as np
from scipy.integrate import solve_ivp
from numpy.linalg import matrix_rank
from scipy.linalg import eigvals
import matplotlib.pyplot as plt
from visualise import simple_plot, phase_plots #, sim_animation
from control import ctrb, lqr
from control.matlab import place
###Output
_____no_output_____
###Markdown
1. Inverted Pendulum on a Cart Model![Model of the inverted pendulum on a cart, with selected free variables and parameters marked. Source: [1], p. 301](img/model.png)The illustration above shows the physical system that we are modelling, outlining its crucial free variables and parameters ([1], p. 301). This model has four **degrees of freedom** (free variables):- $x$ - horizontal position of the cart;- $v = \dot{x}$ - horizontal velocity of the cart;- $\theta$ - angle of deviation of the stiff pendulum arm from the vertical direction;- $\omega = \dot{\theta}$ - angular velocity of the pendulum arm around the joint linking it with the cart.We shall use $Y$ to denote the full, four-dimensional state vector of our system, i.e. $Y = [ x \; v \; \theta \; \omega ]$. For its derivative with respect to time, we have: $\dot{Y} = [ \dot{x} \; \dot{v} \; \dot{\theta} \; \dot{\omega} ]$Moreover, the analysed system is characterised by five physical parameters that are assumed to remain constant in time. These are (with the default values used throughout this presentation; values are given without units, but SI units are assumed everywhere):- $M$ - mass of the cart ($M = 5$);- $m$ - mass of the pendulum, concentrated at its end ($m = 1$);- $L$ - length of the pendulum ($L = 2$);- $g$ - gravitational acceleration ($g = -10$);- $\delta$ - damping factor (friction, air resistance, etc.) affecting the cart ($\delta = 1$).Finally, as one more parameter of the system we may consider $u = F$, i.e. the horizontal force exerted on the cart that changes its velocity. It will be our control parameter. For now, we shall assume that $u = 0$.
###Code
def get_params(*, M=5.0, m=1.0, L=2.0, g=-10.0,d=1.0, b=1.0):
return (M, m, L, g, d, b)
###Output
_____no_output_____
###Markdown
The non-linear dynamics of the system may be derived from basic principles of Newtonian mechanics. These are the **equations describing the evolution of the system** as given in [1] (8.67a-d, pp. 300-301):$$\begin{cases}\frac{dx}{dt} = v \\\frac{dv}{dt} = \frac{1}{D} \left[ -m^2 L^2 g \cos \theta \sin \theta + m L^2 (m L \omega^2 \sin \theta - \delta v ) + m L^2 u \right] \\\frac{d\theta}{dt} = \omega \\\frac{d\omega}{dt} = \frac{1}{D} \left[ (m + M)mgL \sin \theta - mL \cos \theta (mL \omega^2 \sin \theta - \delta v) - mL \cos \theta u \right]\end{cases}\, ,$$where the common denominator $D$ is given by $D = m L^2 (M + m ( 1 - \cos^2 \theta ) )$.The dynamics of this system has two **fixed points** that may be considered convenient **control targets**. These two states consist of an immobile cart with a pendulum that does not rotate and is vertical (either pointing straight up or straight down). Formally, we require that $v = \omega = 0$, and either $\theta = 0$ or $\theta = \pi$. Such states (targets) do not depend on the position of the cart, which may be arbitrary ($x \in \mathbb{R}$).
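As a quick numerical illustration (an added sketch, not part of the original notebook) that both verticals really are fixed points, one can evaluate the right-hand side of these equations at $v = \omega = 0$ and $\theta \in \{0, \pi\}$ using the `get_params` / `get_dynamics` helpers defined in the next cell:

```python
import math
import numpy as np

dyn = get_dynamics(get_params())          # helpers from the next code cell, with u = 0
for theta_star in (0.0, math.pi):
    Y_star = np.array([0.0, 0.0, theta_star, 0.0])
    print(theta_star, dyn(0.0, Y_star))   # both should be (numerically) the zero vector
```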
###Code
def get_dynamics(params, u=0):
M, m, L, g, d, _ = params
def inv_pend(t, Y):
sinth = math.sin(Y[2])
costh = math.cos(Y[2])
D = m * L * L * (M + m * (1 - (costh * costh)))
dY = np.array([
Y[1], # dx
(1./D) * (-(m * m * L * L * g * costh * sinth) + (m * L * L * (m * L * Y[3] * Y[3] * sinth - d * Y[1]) ) ) + (m * L * L * (1./D) * u), # dv
Y[3], # dtheta
(1./D) * (((m + M) * m * g * L * sinth) - (m * L * costh * (m * L * Y[3] * Y[3] * sinth - d * Y[1]))) - (m * L * costh * (1./D) * u) # domega
])
return dY
return inv_pend
###Output
_____no_output_____
###Markdown
Solving the System and Visualising the ResultsBelow, we numerically integrate the non-linear system of ordinary differential equations (ODEs) that describes the system over a given **time interval** $[t_0, t_1]$, with a certain resolution of time steps $\Delta t$. By default, we shall use: $t_0 = 0, t_1 = 20, \Delta t = 0.1$.Also, we need to choose the **initial conditions** $Y_0 = [ x_0 \; v_0 \; \theta_0 \; \omega_0 ]$ that our system starts with. By default, we shall use: $Y_0 = [ 0 \; 0 \; \pi \; 0.5 ]$.
###Code
def get_timespan(*, t0 = 0.0, t1 = 20.0, dt = 0.1):
return [t0, t1], np.arange(t0, t1 + dt, dt)
def get_init_cond(*, x0 = 0.0, v0 = 0.0, th0 = math.pi, w0 = 0.5):
return np.array([x0, v0, th0, w0])
# integrating system of ODEs representing dynamics of the system
par = get_params()
Y0 = get_init_cond()
ts, teval = get_timespan()
dyn = get_dynamics(par)
res = solve_ivp(dyn, ts, Y0, t_eval=teval)
# visualisation - simple plots of values of state variables
simple_plot(res.t, res.y)
# visualisation - plots of (2d projections) of state space associated with either cart, or pendulum.
phase_plots(res.y)
from matplotlib import animation, rc
from IPython.display import HTML
def sim_animation(Y, params, stride = 1):
L = params[2]
fig, ax = plt.subplots(figsize=(15, 10))
plt.grid()
plt.xlabel('x')
ax.set_xlim(( -5, 5))
ax.set_ylim(( - L, L + 1))
p1, = plt.plot([],[])
p2, = plt.plot([],[], c='#000000')
p3 = plt.scatter([],[],s=10,c='#55ff00',marker='s')
p4 = plt.scatter([],[],s=3,c='#000000',marker='o')
plots = [p1, p2, p3, p4]
def init():
p1.set_data([-15, 15], [0, 0])
p3.set_sizes([10000])
p4.set_sizes([2000])
return plots
def animate(frame):
cart_x = Y[0, frame * stride]
cart_y = 0.5
head_x = cart_x + L * math.sin(math.pi - Y[2, frame * stride])
head_y = cart_y + L * math.cos(math.pi - Y[2, frame * stride])
p2.set_data([cart_x, head_x], [cart_y, head_y])
p3.set_offsets([[cart_x, cart_y]])
p4.set_offsets([[head_x, head_y]])
return plots
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=Y.shape[1] // stride, interval=20, blit=True)
HTML(anim.to_html5_video())
rc('animation', html='html5')
return anim
# visualisation - animation of the evolution of a system
sim_animation(res.y, par)
###Output
_____no_output_____
###Markdown
2. Linearisation and ControllabilityA brief inspection of the equations describing our system shows that they are essentially non-linear and quite complex. To address the issue of its control and controllability, we shall use a simpler, linear model obtained through the process of **linearisation**. We can do that with respect to both fixed points pointed out before. Interestingly, it turns out that the systems linearised with respect to either of those points exhibit a similar structure and differ only by the sign of some of the coefficients. For that reason we introduce another parameter $b = \pm 1$ that allows us to easily distinguish these two cases and potential control targets. In further considerations, we shall focus on the case $b=+1$, i.e. where the pendulum arm points upwards.Our system, in its linearised form, is as follows:$$ \frac{dY}{dt} = \mathbf{A} Y + \mathbf{B} u \; , $$where:$\mathbf{A} = \begin{bmatrix}0 & 1 & 0 & 0 \\0 & - \frac{\delta}{M} & \frac{bmg}{M} & 0 \\0 & 0 & 0 & 1 \\0 & - \frac{b \delta}{ML} & - \frac{b(m+M)g}{ML} & 0\end{bmatrix}\quad \text{and} \quad\mathbf{B} = \begin{bmatrix} 0 \\ \frac{1}{M} \\ 0 \\\frac{b}{ML}\end{bmatrix}\; .$
###Code
def get_linear(params):
M, m, L, g, d, b = params
A = np.array([[0.0, 1.0, 0.0, 0.0],
[0.0, - d / M, b * m * g / M, 0.0],
[0.0, 0.0, 0.0, 1.0],
[0.0, -b * d / (M * L), -b * (m + M) * g / (M * L), 0.0]])
B = np.array([[0.0], [1.0 / M], [0.0], [b / (M * L)]])
return A, B
A, B = get_linear(get_params())
print("A = ")
print(A)
print("\nB = ")
print(B)
###Output
A =
[[ 0. 1. 0. 0. ]
[ 0. -0.2 -2. 0. ]
[ 0. 0. 0. 1. ]
[ 0. -0.1 6. 0. ]]
B =
[[0. ]
[0.2]
[0. ]
[0.1]]
###Markdown
StabilityA brief inspection of the plots and animations showing the evolution of our system without any control suggests that, while not particularly chaotic, it is not stable. That observation is confirmed by the analysis of the eigenvalues of the matrix $\mathbf{A}$ of the linearised system (see the code below): the instability is indicated by the presence of a positive eigenvalue.
###Code
eigs = eigvals(A)
print("eigs =", eigs)
###Output
eigs = [ 0. +0.j -2.431123 +0.j -0.23363938+0.j 2.46476238+0.j]
###Markdown
ControllabilityBefore we consider admissible controls for our system, we must address the question of whether it is controllable at all. Since we have the linearised form of our system at hand, we may easily calculate its controllability matrix. The issue of controllability then boils down to whether the rank of the controllability matrix is equal to the number of dimensions of our state vector (here it is $4$). The calculation in the code below shows that, indeed, our system (at least with the default parameters) is controllable.
###Code
ctrb_mx = ctrb(A, B)
ctrb_rk = matrix_rank(ctrb_mx)
if ctrb_rk == 4:
print("System is controllable.")
else:
print("System is not controllable.")
###Output
System is controllable.
###Markdown
3. Examples of Controls that Stabilise the SystemWe know that this system is controllable. This means that there exists a control $u = -\mathbf{K}Y$ such that, in the resulting closed-loop system$$ \frac{dY}{dt} = (\mathbf{A} - \mathbf{BK})Y \; ,$$the matrix $\mathbf{A} - \mathbf{BK}$ can be made to have only eigenvalues with non-positive real parts.In particular, we have access to the *Matlab*-style function ```place(A, B, eigs)``` (from the `control` package) which, for given matrices $\mathbf{A}$ and $\mathbf{B}$ and any set of desired eigenvalues (```eigs```), returns such a matrix $\mathbf{K}$.Below, we showcase some increasingly "aggressive" (i.e. with more negative eigenvalues) examples of control.
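First, as a quick sanity check (an addition, not part of the original notebook), one can verify that `place` really puts the closed-loop eigenvalues where we ask; `A`, `B`, `place` and `eigvals` are the objects already defined or imported above:

```python
desired = np.array([-1.0, -1.1, -1.2, -1.3])
K_check = place(A, B, desired)
print(eigvals(A - B @ K_check))   # should match `desired` up to numerical precision
```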
###Code
# alternative function that returns the equations describing the dynamics of the system under the control u = -K(Y - Y_ref),
# where K is chosen (via pole placement) so that the closed-loop system has the given eigenvalues
def get_dynamics_for_eigs(params, ctrl_eigs):
M, m, L, g, d, b = params
A, B = get_linear(params)
K = place(A, B, ctrl_eigs)
def inv_pend(t, Y):
sinth = math.sin(Y[2])
costh = math.cos(Y[2])
D = m * L * L * (M + m * (1 - (costh * costh)))
        Y_ref = np.array([1.0, 0.0, (b + 1.0) * 0.5 * math.pi, 0.0]) # reference position (i.e. the control target; x = 1)
u = - np.dot(K, Y - Y_ref)
dY = np.array([
Y[1], # dx
(1./D) * (-(m * m * L * L * g * costh * sinth) + (m * L * L * (m * L * Y[3] * Y[3] * sinth - d * Y[1]) ) ) + (m * L * L * (1./D) * u), # dv
Y[3], # dtheta
(1./D) * (((m + M) * m * g * L * sinth) - (m * L * costh * (m * L * Y[3] * Y[3] * sinth - d * Y[1]))) - (m * L * costh * (1./D) * u) # domega
])
return dY
return K, inv_pend
Y0_eigs = get_init_cond(x0=-3.0, v0 = 0.0, th0=math.pi+0.1, w0=0.0)
ts_eigs, t_eval_eigs = get_timespan(dt=0.001)
eig1 = np.array([-0.01, -0.02, -0.03, -0.04])
K1, dyn1 = get_dynamics_for_eigs(par, eig1)
res1 = solve_ivp(dyn1, ts_eigs, Y0_eigs, t_eval=t_eval_eigs)
print("K =", K1)
simple_plot(res1.t, res1.y)
phase_plots(res1.y)
sim_animation(res1.y, par, stride=100)
eig2 = np.array([-0.3, -0.4, -0.5, -0.6])
K2, dyn2 = get_dynamics_for_eigs(par, eig2)
res2 = solve_ivp(dyn2, ts_eigs, Y0_eigs, t_eval=t_eval_eigs)
print("K =", K2)
simple_plot(res2.t, res2.y)
phase_plots(res2.y)
sim_animation(res2.y, par, stride=50)
eig3 = np.array([-1.0, -1.1, -1.2, -1.3])
K3, dyn3 = get_dynamics_for_eigs(par, eig3)
res3 = solve_ivp(dyn3, ts_eigs, Y0_eigs, t_eval=t_eval_eigs)
print("K =", K3)
simple_plot(res3.t, res3.y)
phase_plots(res3.y)
sim_animation(res3.y, par, stride=50)
eig4 = np.array([-2.0, -2.1, -2.2, -2.3])
K4, dyn4 = get_dynamics_for_eigs(par, eig4)
res4 = solve_ivp(dyn4, ts_eigs, Y0_eigs, t_eval=t_eval_eigs)
print("K =", K4)
simple_plot(res4.t, res4.y)
phase_plots(res4.y)
sim_animation(res4.y, par, stride=50)
eig5 = np.array([-3.0, -3.1, -3.2, -3.3])
K5, dyn5 = get_dynamics_for_eigs(par, eig5)
res5 = solve_ivp(dyn5, ts_eigs, Y0_eigs, t_eval=t_eval_eigs)
print("K =", K5)
simple_plot(res5.t, res5.y)
phase_plots(res5.y)
sim_animation(res5.y, par, stride=50)
eig6 = np.array([-3.5, -3.6, -3.7, -3.8])
K6, dyn6 = get_dynamics_for_eigs(par, eig6)
res6 = solve_ivp(dyn6, ts_eigs, Y0_eigs, t_eval=t_eval_eigs)
print("K =", K6)
simple_plot(res6.t, res6.y)
phase_plots(res6.y)
sim_animation(res6.y, par, stride=50)
# TOO LARGE EIGENVALUES - unstable system
#eig7 = np.array([-5.0, -5.1, -5.2, -5.3])
#K7, dyn7 = get_dynamics_for_eigs(par, eig7)
#res7 = solve_ivp(dyn7, ts_eigs, Y0_eigs, t_eval=t_eval_eigs)
#print(K7)
#simple_plot(res7.t, res7.y)
#phase_plots(res7.y)
###Output
_____no_output_____
###Markdown
4. Optimal ControlWe still have to address the question of finding the optimal control for our system (i.e. the matrix $\mathbf{K}$ giving rise to optimal control), given some costs and priorities, i.e. how much we value how quickly the system stabilises, or whether we want to minimise the amount of energy spent on the control.To calculate such a control - for our linear system (in linearised form) with costs expressed as quadratic forms - we can use the **linear-quadratic regulator (LQR)**.The detailed form and derivation of LQR may be found in Evans' lectures ([2], pp. 81-83). Let's just recall that optimality of the found control must be with respect to some cost functional. Here, it has the following form:$$ \mathcal{J}(u) = \int_0^{\infty} \left( Y^T \mathbf{Q} Y + u^T R u \right) dt \; . $$In this case, $u \in \mathbb{R}$, so $R \in \mathbb{R}$ is simply a scalar value. That parameter describes how costly it is to use the control (i.e. to exert force in either direction to adjust the cart velocity), e.g. the cost of the energy or fuel used for the steering. The symmetric matrix $\mathbf{Q} \in \mathbb{R}^{4 \times 4}$ determines how much penalty is associated with errors on the respective state variables (the higher the value, the more important the respective state variable and its agreement with the target).
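For reference, here is a small numerical check (an addition, not from the original notebook) of what `lqr` returns: the gain satisfies $\mathbf{K} = R^{-1}\mathbf{B}^T\mathbf{S}$, where $\mathbf{S}$ solves the continuous algebraic Riccati equation $\mathbf{A}^T\mathbf{S} + \mathbf{S}\mathbf{A} - \mathbf{S}\mathbf{B}R^{-1}\mathbf{B}^T\mathbf{S} + \mathbf{Q} = 0$; `A`, `B` and `lqr` are the objects already defined or imported above:

```python
Q_chk = np.eye(4)
R_chk = np.array([[0.0001]])
K_chk, S_chk, _ = lqr(A, B, Q_chk, R_chk)
care_residual = A.T @ S_chk + S_chk @ A - S_chk @ B @ np.linalg.inv(R_chk) @ B.T @ S_chk + Q_chk
print(np.abs(care_residual).max())                            # should be tiny relative to the entries of S_chk
print(np.allclose(K_chk, np.linalg.inv(R_chk) @ B.T @ S_chk))  # gain recovered from the Riccati solution
```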
###Code
# alternative function that returns the equations describing the dynamics of the system under the optimal LQR control
# for given values of the parameters R and Q (Q defaults to the identity matrix)
def get_dynamics_for_lqr(params, Q_lqr=None, R_lqr=0.0001):
M, m, L, g, d, b = params
A, B = get_linear(params)
if Q_lqr is None:
Q_lqr = np.eye(4)
K, _, eigs = lqr(A, B, Q_lqr, R_lqr)
def inv_pend(t, Y):
sinth = math.sin(Y[2])
costh = math.cos(Y[2])
D = m * L * L * (M + m * (1 - (costh * costh)))
        Y_ref = np.array([1.0, 0.0, (b + 1.0) * 0.5 * math.pi, 0.0]) # reference position (i.e. the control target; x = 1)
u = - np.dot(K, Y - Y_ref)
dY = np.array([
Y[1], # dx
(1./D) * (-(m * m * L * L * g * costh * sinth) + (m * L * L * (m * L * Y[3] * Y[3] * sinth - d * Y[1]) ) ) + (m * L * L * (1./D) * u), # dv
Y[3], # dtheta
(1./D) * (((m + M) * m * g * L * sinth) - (m * L * costh * (m * L * Y[3] * Y[3] * sinth - d * Y[1]))) - (m * L * costh * (1./D) * u) # domega
])
return dY
return K, eigs, inv_pend
ts_lqr, t_eval_lqr = get_timespan(dt=0.001)
K_lqr, eigs_lqr, dyn_lqr = get_dynamics_for_lqr(par)
res_lqr = solve_ivp(dyn_lqr, ts_lqr, Y0_eigs, t_eval=t_eval_lqr)
print(K_lqr)
print(eigs_lqr)
simple_plot(res_lqr.t, res_lqr.y)
phase_plots(res_lqr.y)
sim_animation(res_lqr.y, par, stride=50)
ts_lqr, t_eval_lqr = get_timespan(dt=0.001)
Q = np.diag([1.0, 1.0, 10.0, 100.0])
R = 0.001
K_lqr2, eigs_lqr2, dyn_lqr2 = get_dynamics_for_lqr(par, Q, R)
res_lqr2 = solve_ivp(dyn_lqr2, ts_lqr, Y0_eigs, t_eval=t_eval_lqr)
print(K_lqr2)
print(eigs_lqr2)
simple_plot(res_lqr2.t, res_lqr2.y)
phase_plots(res_lqr2.y)
sim_animation(res_lqr2.y, par, stride=50)
###Output
_____no_output_____
|
chapter-generative-adversarial-networks/colab_pt_DCGAN/DCGAN_ptlr3.ipynb
|
###Markdown
    d2l.set_figsize((4, 4))
    for X, y in data_iter:
        imgs = X[0:20, :, :, :].permute(0, 2, 3, 1) / 2 + 0.5
        d2l.show_images(imgs, num_rows=4, num_cols=5)
        break
###Code
class G_block(nn.Module):
def __init__(self, channels, nz=3, kernel_size=4, strides=2,
padding=1, **kwargs):
super(G_block, self).__init__(**kwargs)
self.conv2d = nn.ConvTranspose2d(
nz, channels, kernel_size, strides, padding, bias=False)
self.batch_norm = nn.BatchNorm2d(channels)
self.activation = nn.ReLU()
def forward(self, X):
return self.activation(self.batch_norm(self.conv2d(X)))
x = torch.zeros((2, 3, 16, 16))
g_blk = G_block(20)
g_blk(x).shape
x = torch.zeros((2, 3, 1, 1))
g_blk = G_block(20, strides=1, padding=0)
g_blk(x).shape
def Conv2DTranspose(channels, kernel_size, strides, padding, use_bias, nc=3):
return nn.ConvTranspose2d(nc, channels, kernel_size=kernel_size,stride=strides, padding=padding, bias=use_bias)
n_G = 64
net_G = nn.Sequential(
G_block(n_G*8, nz=100, strides=1, padding=0), # Output: (64 * 8, 4, 4)
G_block(n_G*4, n_G*8), # Output: (64 * 4, 8, 8)
G_block(n_G*2, n_G*4), # Output: (64 * 2, 16, 16)
G_block(n_G, n_G*2), # Output: (64, 32, 32)
Conv2DTranspose(
3, nc=n_G, kernel_size=4, strides=2, padding=1, use_bias=False),
nn.Tanh() # Output: (3, 64, 64)
)
print(net_G)
x = torch.zeros((1, 100, 1, 1))
net_G(x).shape
alphas = [0, 0.2, 0.4, .6, .8, 1]
x = torch.arange(-2, 1, 0.1)
Y = [nn.LeakyReLU(alpha)(x).numpy() for alpha in alphas]
d2l.plot(x.numpy(), Y, 'x', 'y', alphas)
class D_block(nn.Module):
def __init__(self, channels, nc=3, kernel_size=4, strides=2,
padding=1, alpha=0.2, **kwargs):# nc: in_channels
super(D_block, self).__init__(**kwargs)
self.conv2d = nn.Conv2d(
nc, channels, kernel_size, strides, padding, bias=False)
self.batch_norm = nn.BatchNorm2d(channels)
self.activation = nn.LeakyReLU(alpha)
def forward(self, X):
return self.activation(self.batch_norm(self.conv2d(X)))
x = torch.zeros((2, 3, 16, 16))
d_blk = D_block(20)
d_blk(x).shape
def Conv2D(channels, kernel_size, use_bias, nc=3):
return nn.Conv2d(nc, channels, kernel_size=kernel_size, bias=use_bias)
n_D = 64
net_D = nn.Sequential(
D_block(n_D), # Output: (64, 32, 32)
D_block(n_D*2, n_D), # Output: (64 * 2, 16, 16)
D_block(n_D*4, n_D*2), # Output: (64 * 4, 8, 8)
D_block(n_D*8, n_D*4), # Output: (64 * 8, 4, 4)
Conv2D(1, nc=n_D*8, kernel_size=4, use_bias=False) # Output: (1, 1, 1)
)
print(net_D)
x = torch.zeros((1, 3, 64, 64))
net_D(x).shape
def update_D(X, Z, net_D, net_G, loss, trainer_D):
"""Update discriminator."""
batch_size = X.shape[0]
ones = torch.ones((batch_size, 1, 1, 1), device=d2l.try_gpu())
zeros = torch.zeros((batch_size, 1, 1, 1), device=d2l.try_gpu())
trainer_D.zero_grad()
real_Y = net_D(X)
fake_X = net_G(Z)
# Do not need to compute gradient for `net_G`, detach it from
# computing gradients.
fake_Y = net_D(fake_X.detach())
loss_D = (loss(real_Y, ones) + loss(fake_Y, zeros)) / 2
loss_D.backward()
trainer_D.step()
return loss_D
def update_G(Z, net_D, net_G, loss, trainer_G): #@save
"""Update generator."""
batch_size = Z.shape[0]
ones = torch.ones((batch_size,1,1,1), device=d2l.try_gpu())
# We could reuse `fake_X` from `update_D` to save computation
fake_X = net_G(Z)
# Recomputing `fake_Y` is needed since `net_D` is changed
fake_Y = net_D(fake_X)
loss_G = loss(fake_Y, ones)
loss_G.backward()
trainer_G.step()
return loss_G
def train(net_D, net_G, data_iter, num_epochs, lr, latent_dim, device=d2l.try_gpu()):
loss = nn.BCEWithLogitsLoss()
for w in net_D.parameters():
nn.init.normal_(w, 0, 0.02)
for w in net_G.parameters():
nn.init.normal_(w, 0, 0.02)
net_D = net_D.to(device)
net_G = net_G.to(device)
trainer_hp = {'lr': lr, 'betas': [0.5,0.999]}
trainer_D = torch.optim.Adam(net_D.parameters(), **trainer_hp)
trainer_G = torch.optim.Adam(net_G.parameters(), **trainer_hp)
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
legend=['discriminator', 'generator'])
animator.fig.subplots_adjust(hspace=0.3)
for epoch in range(1, num_epochs + 1):
print('Epoch',epoch)
# Train one epoch
timer = d2l.Timer()
metric = d2l.Accumulator(3) # loss_D, loss_G, num_examples
for X, _ in data_iter:
print('Processing batch')
batch_size = X.shape[0]
Z = torch.normal(0, 1, size=(batch_size, latent_dim, 1, 1))
X, Z = X.to(device), Z.to(device)
metric.add(update_D(X, Z, net_D, net_G, loss, trainer_D),
update_G(Z, net_D, net_G, loss, trainer_G),
batch_size)
# Show generated examples
Z = torch.normal(0, 1, size=(21, latent_dim, 1, 1), device=device)
# Normalize the synthetic data to N(0, 1)
fake_x = net_G(Z).permute(0, 2, 3, 1) / 2 + 0.5
imgs = torch.cat(
[torch.cat([fake_x[i * 7 + j].cpu().detach() for j in range(7)], dim=1)
for i in range(len(fake_x)//7)], dim=0)
animator.axes[1].cla()
animator.axes[1].imshow(imgs)
# Show the losses
loss_D, loss_G = metric[0] / metric[2], metric[1] / metric[2]
animator.add(epoch, (loss_D, loss_G))
print(f'loss_D {loss_D:.3f}, loss_G {loss_G:.3f}, '
f'{metric[2] / timer.stop():.1f} examples/sec on {str(device)}')
latent_dim, lr, num_epochs = 100, 0.005, 20
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)
###Output
_____no_output_____
|
level-1/level1.ipynb
|
###Markdown
Records Count
###Code
print (f'Number of records (rows) = {date_df.count()}')
###Output
Number of records (rows) = 5101693
###Markdown
Transform Dataset to fit tables in schema file
###Code
dropna_review_ebooks_df = date_df.dropna()
dropna_review_ebooks_df.show()
print (f'Number of records (rows) after dropna = {dropna_review_ebooks_df.count()}')
# Load in a sql function to use columns
from pyspark.sql.functions import col
# Create user dataframe to match review_id_etable table
review_id_etable_df = dropna_review_ebooks_df.select(["review_id", "customer_id", "product_id", "product_parent", col("date").alias("review_date")])
review_id_etable_df.show()
# Create user dataframe to match products table
products_df = dropna_review_ebooks_df.select(["product_id", "product_title"]).distinct()
products_df.show()
print (f'Number of products = {products_df.count()}')
# Created data frame to match customer table -- Customer table for first data set
# CREATE TABLE customers (customer_id INT PRIMARY KEY NOT NULL UNIQUE, customer_count INT);
# from pyspark.sql.types import IntegerType
# from pyspark.sql.functions import col
counts_df = dropna_review_ebooks_df.groupBy("customer_id").count().orderBy("customer_id")
counts_df.show(10)
from pyspark.sql.types import IntegerType
# Change the name of the column and the data type
customers_df = counts_df.select(["customer_id", col("count").cast(IntegerType()).alias("customer_count")])
# Check the data types
customers_df.dtypes
customers_df.show(10)
# Create user dataframe to match vine_table table
vine_table_df = dropna_review_ebooks_df.select(["review_id", "star_rating", "helpful_votes", "total_votes", "vine"])
vine_table_df.show()
###Output
+--------------+-----------+-------------+-----------+----+
| review_id|star_rating|helpful_votes|total_votes|vine|
+--------------+-----------+-------------+-----------+----+
| RGYFDX8QXKEIR| 4| 0| 0| N|
|R13CBGTMNV9R8Z| 4| 1| 2| N|
| R7DRFHC0F71O0| 5| 0| 0| N|
|R27LUKEXU3KBXQ| 5| 1| 1| N|
|R1VXTPUYMNU687| 5| 1| 2| N|
|R30DKW1GJWLPZC| 3| 1| 2| N|
|R18DPFG2FALJI9| 5| 0| 0| N|
|R24D677N5WBW5Q| 5| 0| 0| N|
|R2FCJ9BQLSIOR3| 5| 0| 0| N|
|R1R6K4MAKDWTXI| 4| 0| 0| N|
|R3R5DILCWM8J7B| 5| 0| 0| N|
| RR5K72IZOCOFE| 4| 0| 0| N|
|R3K9PJU5GLDY3O| 5| 1| 2| N|
|R1KTZMCDOJXAEK| 5| 0| 0| N|
|R3SBEH4Y3W9W11| 5| 0| 0| N|
|R3GB8WOHSWW2EG| 3| 0| 0| N|
| RDM68WMOEDNRJ| 5| 0| 0| N|
|R3TW70YF2WZK9O| 5| 0| 0| N|
|R39ESX43X1SA5T| 1| 36| 47| N|
|R3MCA5W3BZ65OU| 5| 0| 0| N|
+--------------+-----------+-------------+-----------+----+
only showing top 20 rows
###Markdown
Postgres Setup
###Code
# Configure settings for RDS
mode = "append"
jdbc_url="jdbc:postgresql://<endpoiny>:<port>/<db>"
config = {"user":"<user>",
"password": "<pwd>",
"driver":"org.postgresql.Driver"}
# Write DataFrame to active_user table in RDS
review_id_etable_df.write.jdbc(url=jdbc_url, table='e_review_id_table', mode=mode, properties=config)
products_df.write.jdbc(url=jdbc_url, table='e_products', mode=mode, properties=config)
customers_df.write.jdbc(url=jdbc_url, table='e_customers', mode=mode, properties=config)
# Write DataFrame to active_user table in RDS
vine_table_df.write.jdbc(url=jdbc_url, table='e_vine_table', mode=mode, properties=config)
###Output
_____no_output_____
|
notebooks/09.02-OPTIONAL-Widget Events 2 -- Separating Concerns.ipynb
|
###Markdown
*OPTIONAL* Separating the logic from the widgetsA key principle in designing a graphical user interface is to separate the logic of an application from the graphical widgets the user sees. For example, in the super-simple password generator widget, the basic logic is to construct a sequence of random letters given the length. Let's isolate that logic in a function, without any widgets. This function takes a password length and returns a generated password string.
###Code
def calculate_password(length):
import string
import secrets
    # Generate a list of random letters of the correct length.
password = ''.join(secrets.choice(string.ascii_letters) for _ in range(length))
return password
###Output
_____no_output_____
###Markdown
Test out the function a couple times in the cell below with different lengths. Note that unlike our first pass through this, you can test this function without defining any widgets. This means you can write tests for just the logic, use the function as part of a library, etc.
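As a small illustration of that testing point (an added sketch, not part of the original notebook), a plain function like this can be checked with ordinary asserts and no widgets at all:

```python
import string

def test_calculate_password():
    pw = calculate_password(12)
    assert len(pw) == 12                                   # requested length
    assert all(ch in string.ascii_letters for ch in pw)    # only letters, as the function promises

test_calculate_password()
```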
###Code
calculate_password(10)
###Output
_____no_output_____
###Markdown
The Graphical ControlsThe code to build the graphical user interface widgets is the same as the previous iteration.
###Code
helpful_title = widgets.HTML('Generated password is:')
password_text = widgets.HTML('No password yet')
password_text.layout.margin = '0 0 0 20px'
password_length = widgets.IntSlider(description='Length of password',
min=8, max=20,
style={'description_width': 'initial'})
password_widget = widgets.VBox(children=[helpful_title, password_text, password_length])
password_widget
###Output
_____no_output_____
###Markdown
Connecting the logic to the widgetsWhen the slider `password_length` changes, we want to call `calculate_password` to come up with a new password, and set the value of the widget `password` to the return value of the function call.`update_password` takes the change from the `password_length` as its argument and sets the `password_text` with the result of `calculate_password`.
###Code
def update_password(change):
length = int(change.new)
new_password = calculate_password(length)
# NOTE THE LINE BELOW: it relies on the password widget already being defined.
password_text.value = new_password
password_length.observe(update_password, names='value')
###Output
_____no_output_____
###Markdown
Now that the connection is made, try moving the slider and you should see the password update.
###Code
password_widget
###Output
_____no_output_____
###Markdown
Benefits of separating concernsSome advantages of this approach are:+ Changes in `ipywidgets` only affect your controls setup.+ Changes in functional logic only affect your password generation function. If you decide that a password with only letters isn't secure enough and decide to add some numbers and/or special characters, the only code you need to change is in the `calculate_password` function.+ You can write unit tests for your `calculate_password` function -- which is where the important work is being done -- without doing in-browser testing of the graphical controls. Using interactNote that using interact to build this GUI also emphasizes the separation between the logic and the controls. However, interact also is much more opinionated about how the controls are laid out: controls are in a vbox above the output of the function. Often this is great for a quick initial GUI, but is restrictive for more complex GUIs.
###Code
from ipywidgets import interact
from IPython.display import display
interact(calculate_password, length=(8, 20));
###Output
_____no_output_____
###Markdown
We can make the interact a bit nicer by printing the result, rather than just returning the string. This time we use `interact` as a decorator.
###Code
@interact(length=(8, 20))
def print_password(length):
print(calculate_password(length))
###Output
_____no_output_____
|
10.2_california_housing.ipynb
|
###Markdown
Does the `fit_residual` idea work on other datasets?**TLDR;** When trying to predict MedHouseVal with the California housing dataset, creating a dense model with `fit_residual` makes it worse.When `fit_residual=True`, we build a model that has direct access to the results of a baseline model - in the hope that the model can learn to correct the errors of the baseline with a result that is:- more accurate than without `fit_residual` and/or- achieved with less compute / smaller models.We had some [good results](https://github.com/pete88b/deep_learning_with_python/blob/main/10.2_temperature_forecasting_part2.ipynb) using `fit_residual` to forecast temperature but ... California housing is quite different:- there is no time dimension- there is no common sense baseline - or maybe I just don't have enough common sense to see it (o: - so we try using `LinearRegression` and `DecisionTreeRegressor` as a baseline Note: Using `tensorflow/tensorflow:latest-gpu-jupyter` we can `pip install sklearn pandas` via a Jupyter terminal.
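Before diving in, here is a minimal Keras sketch of the `fit_residual` pattern in isolation (an added illustration; the input names and layer sizes are arbitrary, and the notebook's `new_model` below recovers the baseline prediction from a scaled feature instead of a separate input):

```python
from tensorflow.keras import layers, models

features = layers.Input(shape=(8,), name='features')
baseline_pred = layers.Input(shape=(1,), name='baseline_prediction')
h = layers.Dense(32, activation='relu')(features)
correction = layers.Dense(1)(h)                      # the network only learns a correction term
outputs = baseline_pred + correction                 # final prediction = baseline + learned residual
residual_sketch = models.Model([features, baseline_pred], outputs)
residual_sketch.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
```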
###Code
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor, plot_tree
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras import callbacks
try:
from utils.plot_history import *
except ModuleNotFoundError:
    from pathlib import Path
    if not Path('plot_history.py').is_file():
!wget https://raw.githubusercontent.com/pete88b/deep_learning_with_python/main/utils/plot_history.py
from plot_history import *
###Output
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-loqir5fw because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
###Markdown
Quick look at the data
###Code
df = fetch_california_housing(data_home='data', as_frame=True)['frame']
df
df.describe()
###Output
_____no_output_____
###Markdown
Load the data as numpy
###Code
dataset = fetch_california_housing(data_home='data')
X_full, y_full, feature_names, target_names = [dataset[k] for k in ['data', 'target', 'feature_names', 'target_names']]
X_full.shape, y_full.shape, feature_names, target_names
###Output
_____no_output_____
###Markdown
Work out how many samples we'll have in each dataset
###Code
n_train_samples = len(y_full) // 2
n_val_samples = len(y_full) // 4
n_test_samples = len(y_full) - n_train_samples - n_val_samples
###Output
_____no_output_____
###Markdown
Split data into train, val, test and scale inputs
###Code
def split_and_scale_data(X, y):
X_train, y_train = X[:n_train_samples], y[:n_train_samples]
X_val, y_val = X[n_train_samples:n_train_samples+n_val_samples], y[n_train_samples:n_train_samples+n_val_samples]
X_test, y_test = X[n_train_samples+n_val_samples:], y[n_train_samples+n_val_samples:]
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_val, X_test = [scaler.transform(X) for X in [X_val, X_test]]
return X_train, y_train, X_val, y_val, X_test, y_test, scaler
X_train, y_train, X_val, y_val, X_test, y_test, scaler = split_and_scale_data(X_full, y_full)
X_train.shape, y_train.shape, X_val.shape, y_val.shape
###Output
_____no_output_____
###Markdown
Create a baseline model`LinearRegression` is quite a bit more accurate than `DecisionTreeRegressor` so we'll use it as our baseline
###Code
ml_model = LinearRegression().fit(X_train, y_train)
print('Val MAE', mean_absolute_error(y_val, ml_model.predict(X_val)))
print('Test MAE', mean_absolute_error(y_test, ml_model.predict(X_test)))
###Output
Val MAE 0.5054755000767698
Test MAE 0.5753288612466615
###Markdown
I was hoping that a really small tree would be good enough - but `DecisionTreeRegressor(max_depth=3)` gives us Val MAE 0.61 and Test MAE 0.68. Not sure what a good baseline is for this problem, but I think we need to do better than that (o:Note: Using the `DecisionTreeRegressor` hurt the dense model less than `LinearRegression`, when `fit_residual=True`.
###Code
# ml_model = DecisionTreeRegressor(min_samples_leaf=40).fit(X_train, y_train)
# print('Val MAE', mean_absolute_error(y_val, ml_model.predict(X_val)))
# print('Test MAE', mean_absolute_error(y_test, ml_model.predict(X_test)))
# plt.figure(figsize=(16,5))
# plot_tree(ml_model, fontsize=10) # Plot the whole tree even though we can only read the first few branches
# plt.show()
def new_model(fit_residual=False):
"Create a simple fully connected nn"
kernel_regularizer = regularizers.l2(1e-2)
inputs = layers.Input(shape=(X_train.shape[-1]))
x = keras.layers.Dense(64, activation='relu', kernel_regularizer=kernel_regularizer)(inputs)
x = keras.layers.Dropout(0.5)(x)
x = keras.layers.Dense(64, activation='relu', kernel_regularizer=kernel_regularizer)(x)
x = keras.layers.Dropout(0.5)(x)
x = keras.layers.Dense(1)(x)
if fit_residual:
x = tf.expand_dims(inputs[:, -1] * scaler.scale_[-1] + scaler.mean_[-1], -1) + x
model = keras.models.Model(inputs, x)
model.compile(
optimizer='rmsprop',
loss=keras.losses.mean_squared_error,
metrics=['mae']
)
return model
def compile_and_fit(model, model_tag):
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
kwargs = dict(monitor='val_mae', verbose=1)
cbs = [callbacks.ModelCheckpoint(f'data/models/california_{model_tag}.keras',
save_best_only=True, **kwargs),
callbacks.EarlyStopping(patience=2, **kwargs)]
history = model.fit(X_train, y_train,
batch_size=128,
epochs=20,
validation_data=(X_val, y_val),
callbacks=cbs)
del history.history['loss'] # we'll just plot mae
plot_history(history, ignore_first_n=0)
def evaluate_model(model_tag):
model = keras.models.load_model(f'data/models/california_{model_tag}.keras')
print(f"Test MAE: {model.evaluate(X_test, y_test)[1]:.2f} for model tag: {model_tag}")
model = new_model()
compile_and_fit(model, 'dense_0')
evaluate_model('dense_0')
###Output
162/162 [==============================] - 0s 1ms/step - loss: 0.6468 - mae: 0.5113
Test MAE: 0.51 for model tag: dense_0
###Markdown
Add the "ML model" predictions as an extra feature
###Code
X_full_1 = np.concatenate([X_full, ml_model.predict(X_full)[:,None]], axis=1)
X_train, y_train, X_val, y_val, X_test, y_test, scaler = split_and_scale_data(X_full_1, y_full)
X_train.shape, y_train.shape, X_val.shape, y_val.shape
###Output
_____no_output_____
###Markdown
Now we can create a model that uses `fit_residual`
###Code
model = new_model(True)
compile_and_fit(model, 'dense_1')
evaluate_model('dense_1')
###Output
162/162 [==============================] - 0s 1ms/step - loss: 273.3255 - mae: 2.6836
Test MAE: 2.68 for model tag: dense_1
###Markdown
Let's take a look at some predictions
###Code
import pandas as pd
model_0, model_1 = [keras.models.load_model(f'data/models/california_dense_{i}.keras') for i in [0,1]]
df = pd.DataFrame(dict(y_test=y_test,
ml_model=ml_model.predict(X_test[:, :8]),
model_0=np.squeeze(model_0.predict(X_test[:, :8])),
model_1=np.squeeze(model_1.predict(X_test))))
df.insert(2, 'ml_model_diff', df['ml_model'] - df['y_test'])
df.insert(4, 'model_0_diff', df['model_0'] - df['y_test'])
df['model_1_diff'] = df['model_1'] - df['y_test']
###Output
_____no_output_____
###Markdown
See where the dense model's predictions are furthest from the actual values
###Code
df.sort_values(by='model_0_diff', key=abs, ascending=False)
###Output
_____no_output_____
###Markdown
See where the predictions of the dense model with `fit_residual` are furthest from the actual valuesWe don't have to look very hard to see that the model with `fit_residual` is making some really bad predictions ↓
###Code
df.sort_values(by='model_1_diff', key=abs, ascending=False)
###Output
_____no_output_____
|
GeoParser/.ipynb_checkpoints/georeferenceCalculator_Selenium-checkpoint.ipynb
|
###Markdown
Vocabulario Para realizar la georreferenciación es importante utilizar un vocabulario controlado, el cual se presenta a continuación:- Coordinate Source: - gazetteer - Google Earth/Maps <-2008 - Google Earth/Maps >2008 - GPS - locality description - USGS map: 1:250000 - USGS map: 1:100000 - USGS map: 1:63360 - USGS map: 1:62500 - USGS map: 1:25000 - USGS map: 1:24000 - USGS map: 1:12000 - USGS map: 1:10000 - USGS map: 1:4800 - USGS map: 1:2400 - USGS map: 1:1200 - NTS map (A): 1:250000 - NTS map (B): 1:250000 - NTS map (C): 1:250000 - NTS map (A): 1:50000 - NTS map (B): 1:50000 - NTS map (C): 1:50000 - other map: 1:3000000 - other map: 1:2500000 - other map: 1:1000000 - other map: 1:500000 - other map: 1:250000 - other map: 1:200000 - other map: 1:180000 - other map: 1:150000 - other map: 1:125000 - other map: 1:100000 - other map: 1:80000 - other map: 1:62500 - other map: 1:60000 - other map: 1:50000 - other map: 1:40000 - other map: 1:32500 - other map: 1:25000 - other map: 1:20000 - other map: 1:10000- Coordinate System - decimal degrees - degrees minutes seconds - degrees decimal minutes- Datum - (WGS84) World Geodetic System 1984 - Abidjan 1987 - Accra - Aden 1925 - Adindan - Afgooye - Ain el Abd 1970 - Airy 1830 ellipsoid - Airy Modified 1849 ellipsoid - Alaska (NAD27) - Alaska/Canada (NAD27) - Albanian 1987 - American Samoa 1962 - Amersfoort - Ammassalik 1958 - Anguilla 1957 - Anna 1 Astro 1965 - Antigua Island Astro 1943 - Aratu - Arc 1950 mean - Arc 1960 mean - Ascension Island 1958 - Astro Beacon \E\ 1945 - Astro DOS 71/4 - Astro Tern Island (FRIG) 1961 - Astronomic Station No. 1 1951 - Astronomic Station No. 2 1951, Truk Island - Astronomic Station Ponape 1951 - Astronomical Station 1952 - Australian Antarctic Datum 1998 - (AGD66) Australian Geodetic Datum 1966 - (AGD84) Australian Geodetic Datum 1984 - Australian National ellipsoid - Autonomous Regions of Portugal 2008 - Average Terrestrial System 1977 ellipsoid - Ayabelle Lighthouse - Azores Central Islands 1948 - Azores Central Islands 1995 - Azores Occidental Islands 1939 - Azores Oriental Islands 1940 - Azores Oriental Islands 1995 - Bahamas (NAD27) - Barbados 1938 - Batavia - Beduaram - Beijing 1954 - Bekaa Valley 1920 (IGN) - Bellevue (IGN) - Bermuda 1957 - Bermuda 2000 - Bessel 1841 ellipsoid (Namibia) - Bessel 1841 ellipsoid (non-Namibia) - Bessel Modified ellipsoid BM - Bhutan National Geodetic Datum - Bioko - Bissau - Bogota 1975 - Bukit Rimpah - Bulgaria Geodetic System 2005 - Cadastre 1997 - Camacupa - Camp Area Astro - Campo Inchauspe - Canada Mean (NAD27) - Canal Zone (NAD27) - Canton Astro 1966 - Cape - Cape Canaveral mean - Caribbean (NAD27) - Carthage - Cayman Islands Geodetic Datum 2011 - Central America (NAD27) - Centre Spatial Guyanais 1967 - CH1903 - CH1903+ - Chatham Island Astro 1971 - Chatham Islands Datum 1979 - Chua Astro - Clarke 1858 ellipsoid - Clarke 1866 ellipsoid - Clarke 1880 ellipsoid - Clarke 1880 (Benoit) ellipsoid - Clarke 1880 (international foot) ellipsoid - Co-Ordinate System 1937 of Estonia - Cocos Islands 1965 - Combani 1950 - Conakry 1905 - Congo 1960 Pointe Noire - Corrego Alegre 1961 - Corrego Alegre 1970-72 - Costa Rica 2005 - Cuba (NAD27) - Croatian Terrestrial Reference System - Cyprus - Cyprus Geodetic Reference System 1993 - Dabola 1981 - Danish 1876 - Datum 73 - Datum Geodesi Nasional 1995 - Dealul Piscului 1930 - Deception Island - Deir ez Zor - Deutsches Hauptdreiecksnetz - Diego Garcia 1969 - Djakarta (Batavia) - DOS 1968 - Dominica 1945 - Douala 1948 - Easter Island 1967 - 
Egypt 1907 - Egypt Gulf of Suez S-650 TL - Estonia 1992 - Estonia 1997 - European 1950 - European 1950 mean - European Datum 1950(1977) - European 1979 mean - European Datum 1987 - European Libyan Datum 1979 - European Terrestrial Reference System 1989 - Everest ellipsoid (Brunei, Sabah, Sarawak) - Everest ellipsoid (W. Malaysia, Singapore 1948) - Everest 1830 (1937 Adjustment) ellipsoid India - Everest India 1856 ellipsoid - Everest 1830 (1962 Definition) ellipsoid India - Everest 1830 (1975 Definition) ellipsoid India - Everest Pakistan ellipsoid - Everest W. Malaysia 1969 ellipsoid - Fahud - Fatu Iva 72 - Fehmarnbelt Datum 2010 - Fiji 1956 - Fiji Geodetic Datum 1986 - Final Datum 1958 - Finnish Nautical Chart - Fort Marigot - Fort Thomas 1955 - Gambia - Gan 1970 - Gandajika Base - Gan 1970 - Geocentric Datum Brunei Darussalam 2009 - Geocentric Datum of Australia 1994 - Geocentric Datum of Australia 2020 - Geocentric Datum of Korea - Geodetic Datum of 1965 - Geodetic Datum 1949 - Geodetic Reference System 1967 ellipsoid - Geodetic Reference System 1967 Modified ellipsoid - Geodetic Reference System 1980 ellipsoid - Ghana - Graciosa Base SW 1948 - Grand Cayman Geodetic Datum 1959 - Grand Comoros - Greek Geodetic Reference System 1987 - Greenland (NAD27) - Greenland 1996 - Grenada 1953 - Guadeloupe 1948 - Guam 1963 - Gulshan 303 - Gunung Segara - Gunung Serindung 1962 - GUX 1 Astro - Hanoi 1972 - Hartebeesthoek94 - Helle 1954 - Helmert 1906 ellipsoid - Herat North - Hito XVIII 1963 - Hjorsey 1955 - Hong Kong 1963(67) - Hong Kong 1980 - Hong Kong Geodetic - Hough 1960 ellipsoid - Hu-Tzu-Shan 1950 - Hungarian Datum 1909 - Hungarian Datum 1972 - IGN 1962 Kerguelen - IGN53 Mare - IGN56 Lifou - IGN63 Hiva Oa - IGN72 Grande Terre - IGN72 Nuku Hiva - Indian - Indian 1954 - Indian 1960 - Indian 1975 - Indonesian 1974 ellipsoid - Indonesian Datum 1974 - Institut Geographique du Congo Belge 1955 - International 1924 ellipsoid - Iran - Iraqi Geospatial Reference System - Iraq-Kuwait Boundary Datum 1992 - Ireland 1965 - INET95 - Islands Net 1993 - Islands Net 2004 - Israel 1993 - Istituto Geografico Militaire 1995 - ISTS 061 Astro 1968 - ISTS 073 Astro 1969 - Iwo Jima 1945 - Jamaica 1969 - Jamaica 2001 - Japanese Geodetic Datum 2000 - Johnston Island 1961 - Jouik 1961 - Kalianpur 1937 - Kalianpur 1962 - Kalianpur 1975 - Kandawala - Kapingamarangi Astronomic Station No. 3 1951 - Karbala 1979 - Kartastokoordinaattijarjestelma (1966) - Katanga 1955 - Kerguelen Island 1949 - Kertau 1948 - Kertau 1968 - Korean Datum 1985 - Korean Geodetic System 1995 - Kosovo Reference System 2001 - Krassowsky 1940 ellipsoid - Kusaie Astro 1951 - Kuwait Oil Company - Kuwait Utility - L.C. 5 Astro 1961 - La Canoa - La Reunion - Lao National Datum 1997 - Latvia 1992 - Le Pouce 1934 - Leigon - Lemuta - Liberia 1964 - Libyan Geodetic Datum 2006 - Lisbon 1890 - Lisbon 1937 - Lithuania 1994 (ETRS89) - Locodjo 1965 - Luxembourg 1930 - Luzon 1911 - Macao 1920 - Macao Geodetic Datum 2008 - Mahe 1971 - Makassar - Malongo 1987 - Manoca 1962 - Marco Astro - Marco Geocentrico Nacional de Referencia - Marco Geodesico Nacional de Bolivia - Marcus Island 1952 - Marshall Islands 1960 - Martinique 1938 - Masirah Is. 
(Nahrwan) - Massawa - Maupiti 83 - Mauritania 1999 - Merchich - Mexico (NAD27) - Mexico IT2008 - Mexico IT92 - MGI 1901 - Midway Astro 1961 - Militar-Geographische Institut - Mindanao - Minna - Modified Fischer 1960 ellipsoid - MOLDR99 - MOMRA Terrestrial Reference Frame 2000 - Monte Mario - Montjong Lowe - Montserrat Island Astro 1958 - Moorea 87 - MOP78 - Moznet (IT94) - M'Poraloko - Nahrwan - Nahrwan 1934 - Nahrwan 1967 - Nakhl-e Ghanem - Naparima 1955 - Naparima 1972 - Naparima, BWI - National Geodetic Network - N74 Noumea - Nepal 1981 - New Zealand Geodetic Datum 1949 - New Zealand Geodetic Datum 2000 - NGO 1948 BM - (NAD83) North American Datum 1983 - NAD83 (High Accuracy Reference Network) - NAD83 (National Spatial Reference System 2007) - NAD83 Canadian Spatial Reference System - (NAD27) North American 1927 mean - North American Datum 1927 - North American Datum 1927 (1976) - North American Datum 1927 (CGQ77) - Nord Sahara 1959 - Nouakchott 1965 - Nouvelle Triangulation Francaise - Observatorio Meteorologico 1939 - Observatorio 1966 - Ocotepeque 1935 - Old Egyptian 1907 - Old Hawaiian mean - Old Hawaiian Kauai - Old Hawaiian Maui - Old Hawaiian Oahu - Old Trinidad 1903 - Oman - Oman National Geodetic Datum 2014 - Ordnance Survey of Great Britain 1936 - Ordnance Survey of Northern Ireland 1952 - Padang 1884 - Palestine 1923 - Pampa del Castillo - Papua New Guinea Geodetic Datum 1994 - Parametry Zemli 1990 PZ - PDO Survey Datum 1993 - Peru96 - Petrels 1972 - Philippine Reference System 1992 - Phoenix Islands 1966 - Pico de las Nieves 1984 - Pitcairn 2006 - Pitcairn Astro 1967 - Point 58 - Point Noire 1958 - Pointe Geologie Perroud 1950 - Porto Santo 1936 - Porto Santo 1995 - Posiciones Geodesicas Argentinas 1994 - Posiciones Geodesicas Argentinas 1998 - Posiciones Geodesicas Argentinas 2007 - Potsdam Datum/83 - Potsdam Rauenberg DH - Provisional South American 1956 - Provisional South Chilean 1963 - Puerto Rico - Pulkovo 1942 - Pulkovo 1942(58) - Pulkovo 1942(83) - Pulkovo 1995 - PZ-90 PZ - Qatar 1974 - Qatar National Datum 1995 - Qornoq 1927 - Rassadiran - Rauenberg Datum/83 - Red Geodesica de Canarias 1995 - Red Geodesica Venezolana - Reseau de Reference des Antilles Francaises 1991 - Reseau Geodesique de la Polynesie Francaise - Reseau Geodesique de la C 2005 - Reseau Geodesique de la Reunion 1992 - Reseau Geodesique de Mayotte 2004 - Reseau Geodesique de Nouvelle Caledonie 91-93 - Reseau Geodesique de Saint Pierre et Miquelon 2006 - Reseau Geodesique des Antilles Francaises 2009 - Reseau Geodesique Francais 1993 - Reseau Geodesique Francais Guyane 1995 - Reseau National Belge 1972 - Rete Dinamica Nazionale 2008 - Reunion 1947 - Reykjavik 1900 - Rikets koordinatsystem 1990 - Rome 1940 - Ross Sea Region Geodetic Datum 2000 - S-42 - S-JTSK - Saint Pierre et Miquelon 1950 - Santo (DOS) 1965 - Sao Braz - Sapper Hill 1943 - Schwarzeck - Scoresbysund 1952 - Selvagem Grande 1938 - Serbian Reference Network 1998 - Serbian Spatial Reference System 2000 - Sicily - Sierra Leone 1960 - Sierra Leone 1968 - SIRGAS_ES2007.8 - SIRGAS-Chile - SIRGAS-ROU98 - Sistema de Referencia Geocentrico para America del Sur 1995 - Sistema de Referencia Geocentrico para las Americas 2000 - Sistema Geodesico Nacional de Panama MACARIO SOLIS - Sister Islands Geodetic Datum 1961 - Slovenia Geodetic Datum 1996 - Solomon 1968 - South American 1969 ellipsoid - South American Datum 1969 - South American Datum 1969(96) - South Asia - South East Island 1943 - South Georgia 1968 - South Yemen - Southeast Base - Sri Lanka 
Datum 1999 - St. George Island - St. Helena Geodetic Datum 2015 - St. Helena Tritan - St. Kitts 1955 - St. Lawrence Island - St. Lucia 1955 - St. Paul Island - St. Vincent 1945 - ST71 Belep - ST84 Ile des Pins - ST87 Ouvea - SVY21 - SR99 - Swiss Terrestrial Reference Frame 1995 - System of the Unified Trigonometrical Cadastral Ne - Tahaa 54 - Tahiti 52 - Tahiti 79 - Taiwan Datum 1997 - Tananarive Observatory 1925 - Tern Island 1961 - Tete - Timbalai 1948 - TM65 - Tokyo - Trinidad 1903 - Tristan Astro 1968 - Turkish National Reference Frame - Ukraine 2000 - United Arab Emirates (Nahrwan) - Vanua Levu 1915 - Vietnam 2000 - Viti Levu 1912 - Viti Levu 1916 - Voirol 1874 - Voirol 1875 - Voirol 1960 - (WGS66) World Geodetic System 1966 WC - (WGS66) World Geodetic System 1966 ellipsoid WC - (WGS72) World Geodetic System 1972 - (WGS72) World Geodetic System 1972 ellipsoid - (WGS72) Transit Broadcast Ephemeris - Wake Island Astro 1952 - Wake-Eniwetok 1960 - War Office ellipsoid WO - Yacare - Yemen National Geodetic Network 1996 - Yoff - Zanderij - Measurement Error - A real number with a point as the decimal separator. E.g.: 100, 100.0, 50.354 - Distance Units - km - m - mi - yds - ft - nm How to enter the values Since this is based on georeferencing for databases in the Darwin Core standard, this information will be added in the "georeferenceRemarks" field, and information must be provided for the following fields mentioned above. - Coordinate Source- Measurement Error- Distance Units The information for the rest of the fields mentioned above will be extracted from the same database, from the fields shown below.- Coordinate System -> verbatimCoordinateSystem- Latitude -> verbatimLatitude- Longitude -> verbatimLongitude- Datum -> verbatimSRS This is why it is important to use a controlled vocabulary. To make this task easier, it is recommended to use software such as OpenRefine. Example of entering values: the dynamicProperties field will be used to enter these values. **If you do not know the Coordinate Source information, use "locality description"** Useful links- https://selenium-python.readthedocs.io/navigating.htmlinteracting-with-the-page
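To make the expected content concrete, here is a minimal sketch of how the three manually provided values could be packed into a JSON string for the dynamicProperties field; the key names below are only an assumption for illustration, while the values follow the controlled vocabulary above.
###Code
import json

# Hypothetical example record: key names are an illustrative assumption,
# values follow the controlled vocabulary listed above.
manual_fields = {
    "coordinateSource": "locality description",  # use this when the source is unknown
    "measurementError": 100.0,                   # real number, point as decimal separator
    "distanceUnits": "m",                        # one of: km, m, mi, yds, ft, nm
}
print(json.dumps(manual_fields))
###Output
_____no_output_____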
###Code
from GeoParser import *
import pandas as pd
df=pd.read_csv("sel_test.csv",sep=",")
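# For each record, take the verbatim Darwin Core fields and pass them to the
# Selenium-based georeferencer provided by GeoParser; each call prints a dict
# with the computed decimal coordinates, uncertainty and datum (see below).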
for index in df.index:
verbatimLatitude=df.loc[index,"verbatimLatitude"]
verbatimLongitude=df.loc[index,"verbatimLongitude"]
dynamicProperties=df.loc[index,"dynamicProperties"]
verbatimCoordinteSystem=df.loc[index,"verbatimCoordinteSystem"]
verbatimSRS=df.loc[index,"verbatimSRS"]
SeleniumGeoref(verbatimCoordinteSystem,verbatimLatitude,verbatimLongitude,dynamicProperties,verbatimSRS).georeferencer()
{'decimalLatitude': '-33.440207', 'decimalLongitude': '-70.82527', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:44.193Z'}
{'decimalLatitude': '-33.439617', 'decimalLongitude': '-70.825892', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:45.626Z'}
{'decimalLatitude': '-33.438857', 'decimalLongitude': '-70.827066', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:46.761Z'}
{'decimalLatitude': '-33.441809', 'decimalLongitude': '-70.824534', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:48.379Z'}
{'decimalLatitude': '-33.443833', 'decimalLongitude': '-70.824534', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:50.278Z'}
{'decimalLatitude': '-33.436871', 'decimalLongitude': '-70.828026', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:51.579Z'}
{'decimalLatitude': '-33.484893', 'decimalLongitude': '-70.92268', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:52.691Z'}
{'decimalLatitude': '-33.484708', 'decimalLongitude': '-70.92242', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:53.916Z'}
{'decimalLatitude': '-33.486615', 'decimalLongitude': '-70.921183', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:55.139Z'}
{'decimalLatitude': '-33.487443', 'decimalLongitude': '-70.920062', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:56.185Z'}
{'decimalLatitude': '-33.486966', 'decimalLongitude': '-70.919583', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:57.183Z'}
{'decimalLatitude': '-33.48621', 'decimalLongitude': '-70.920949', 'coordinateUncertaintyInMeters': '6', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:58.420Z'}
{'decimalLatitude': '-33.483692', 'decimalLongitude': '-70.924028', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:54:59.488Z'}
{'decimalLatitude': '-39.7912417', 'decimalLongitude': '-72.2346389', 'coordinateUncertaintyInMeters': '5', 'geodeticDatum': '(WGS84) World Geodetic System 1984', 'coordinatePrecision': '0.0000001', 'georeferencerDate': '2021-03-10T04:55:01.949Z'}
driver.quit()
###Output
_____no_output_____
|
labs/07_seq2seq/Translation_of_Numeric_Phrases_with_Seq2Seq.ipynb
|
###Markdown
Translation of Numeric Phrases with Seq2SeqIn the following we will try to build a **translation model from French phrases describing numbers** to the corresponding **numeric representation** (base 10).This is a toy machine translation task with a **restricted vocabulary** and a **single valid translation for each source phrase**, which makes it more tractable to train on a laptop computer and easier to evaluate. Despite those limitations we expect that this task will highlight interesting properties of Seq2Seq models, including:- the ability to **deal with different lengths** of the source and target sequences,- handling tokens with a **meaning that changes depending on the context** (e.g. "quatre" vs "quatre vingts" in "quatre cents"),- basic counting and "reasoning" capabilities of LSTM and GRU models.The parallel text data is generated from a "ground-truth" Python function named `to_french_phrase` that captures common rules. Hyphenation was intentionally omitted to make the phrases more ambiguous and therefore make the translation problem slightly harder to solve (and also because Olivier had no particular interest in properly implementing hyphenation rules :).
###Code
from french_numbers import to_french_phrase
for x in [21, 80, 81, 300, 213, 1100, 1201, 301000, 80080]:
print(str(x).rjust(6), to_french_phrase(x))
###Output
_____no_output_____
###Markdown
Generating a Training SetThe following will **generate 20,000 example phrases for numbers between 1 and 1,000,000** (excluded). We chose to over-represent small numbers by generating all the possible short sequences between 1 and `exhaustive`.We then split the generated set into non-overlapping train, validation and test splits.
###Code
from french_numbers import generate_translations
from sklearn.model_selection import train_test_split
numbers, french_numbers = generate_translations(
low=1, high=int(1e6) - 1, exhaustive=5000, random_seed=0)
num_train, num_dev, fr_train, fr_dev = train_test_split(
numbers, french_numbers, test_size=0.5, random_state=0)
num_val, num_test, fr_val, fr_test = train_test_split(
num_dev, fr_dev, test_size=0.5, random_state=0)
len(fr_train), len(fr_val), len(fr_test)
for i, fr_phrase, num_phrase in zip(range(5), fr_train, num_train):
print(num_phrase.rjust(6), fr_phrase)
for i, fr_phrase, num_phrase in zip(range(5), fr_val, num_val):
print(num_phrase.rjust(6), fr_phrase)
###Output
_____no_output_____
###Markdown
VocabulariesBuild the vocabularies from the training set only to get a chance to have some out-of-vocabulary words in the validation and test sets.First we need to introduce specific symbols that will be used to:- pad sequences- mark the beginning of translation- mark the end of translation- be used as a placeholder for out-of-vocabulary symbols (not seen in the training set).Here we use the same convention as the [tensorflow seq2seq tutorial](https://www.tensorflow.org/tutorials/seq2seq):
###Code
PAD, GO, EOS, UNK = START_VOCAB = ['_PAD', '_GO', '_EOS', '_UNK']
###Output
_____no_output_____
###Markdown
To build the vocabulary we need to tokenize the sequences of symbols. For the digital number representation we use character level tokenization while whitespace-based word level tokenization will do for the French phrases:
###Code
def tokenize(sentence, word_level=True):
if word_level:
return sentence.split()
else:
return [sentence[i:i + 1] for i in range(len(sentence))]
tokenize('1234', word_level=False)
tokenize('mille deux cent trente quatre', word_level=True)
###Output
_____no_output_____
###Markdown
Let's now use this tokenization strategy to assign a unique integer token id to each possible token string found in the training set in each language ('French' and 'numeric'):
###Code
def build_vocabulary(tokenized_sequences):
rev_vocabulary = START_VOCAB[:]
unique_tokens = set()
for tokens in tokenized_sequences:
unique_tokens.update(tokens)
rev_vocabulary += sorted(unique_tokens)
vocabulary = {}
for i, token in enumerate(rev_vocabulary):
vocabulary[token] = i
return vocabulary, rev_vocabulary
tokenized_fr_train = [tokenize(s, word_level=True) for s in fr_train]
tokenized_num_train = [tokenize(s, word_level=False) for s in num_train]
fr_vocab, rev_fr_vocab = build_vocabulary(tokenized_fr_train)
num_vocab, rev_num_vocab = build_vocabulary(tokenized_num_train)
###Output
_____no_output_____
###Markdown
The two languages do not have the same vocabulary sizes:
###Code
len(fr_vocab)
len(num_vocab)
for k, v in sorted(fr_vocab.items())[:10]:
print(k.rjust(10), v)
print('...')
for k, v in sorted(num_vocab.items()):
print(k.rjust(10), v)
###Output
_____no_output_____
###Markdown
We also built the reverse mappings from token ids to token string representations:
###Code
print(rev_fr_vocab)
print(rev_num_vocab)
###Output
_____no_output_____
###Markdown
Seq2Seq with a single GRU architectureFrom: [Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." NIPS 2014](https://arxiv.org/abs/1409.3215)For a given source sequence - target sequence pair, we will:- tokenize the source and target sequences;- reverse the order of the source sequence;- build the input sequence by concatenating the reversed source sequence and the target sequence in original order using the `_GO` token as a delimiter, - build the output sequence by appending the `_EOS` token to the target sequence.Let's do this as a function using the original string representations for the tokens so as to make it easier to debug: **Exercise**- Build a function which adapts a pair of tokenized sequences to the framework above.- The function should have a `reverse_source` option.*Note*: - The function should output two sequences of string tokens: one to be fed as the input and the other as expected output for the seq2seq network. We will handle the padding later;- Don't forget to insert the `_GO` and `_EOS` special symbols at the right locations.
###Code
def make_input_output(source_tokens, target_tokens, reverse_source=True):
    # TODO: build input_tokens and output_tokens as described above
return input_tokens, output_tokens
# %load solutions/make_input_output.py
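# One possible sketch of the construction described above (named differently
# so it does not clash with the exercise function or the loaded solution):
# reverse the source, add the _GO delimiter, then append _EOS to the target.
def make_input_output_sketch(source_tokens, target_tokens, reverse_source=True):
    source = source_tokens[::-1] if reverse_source else list(source_tokens)
    input_tokens = source + [GO] + list(target_tokens)
    output_tokens = list(target_tokens) + [EOS]
    return input_tokens, output_tokens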
input_tokens, output_tokens = make_input_output(
['cent', 'vingt', 'et', 'un'],
['1', '2', '1'],
)
input_tokens
output_tokens
###Output
_____no_output_____
###Markdown
Vectorization of the parallel corpusLet's apply the previous transformation to each pair of (source, target) sequences and use a shared vocabulary to store the results in numpy arrays of integer token ids, with padding on the left so that all input / output sequences have the same length:
###Code
all_tokenized_sequences = tokenized_fr_train + tokenized_num_train
shared_vocab, rev_shared_vocab = build_vocabulary(all_tokenized_sequences)
import numpy as np
max_length = 20 # found by introspection of our training set
def vectorize_corpus(source_sequences, target_sequences, shared_vocab,
word_level_source=True, word_level_target=True,
max_length=max_length):
assert len(source_sequences) == len(target_sequences)
n_sequences = len(source_sequences)
source_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
source_ids.fill(shared_vocab[PAD])
target_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
target_ids.fill(shared_vocab[PAD])
numbered_pairs = zip(range(n_sequences), source_sequences, target_sequences)
for i, source_seq, target_seq in numbered_pairs:
source_tokens = tokenize(source_seq, word_level=word_level_source)
target_tokens = tokenize(target_seq, word_level=word_level_target)
in_tokens, out_tokens = make_input_output(source_tokens, target_tokens)
in_token_ids = [shared_vocab.get(t, UNK) for t in in_tokens]
source_ids[i, -len(in_token_ids):] = in_token_ids
out_token_ids = [shared_vocab.get(t, UNK) for t in out_tokens]
target_ids[i, -len(out_token_ids):] = out_token_ids
return source_ids, target_ids
X_train, Y_train = vectorize_corpus(fr_train, num_train, shared_vocab,
word_level_target=False)
X_train.shape
Y_train.shape
fr_train[0]
num_train[0]
X_train[0]
Y_train[0]
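# Illustrative check (a sketch): map the first vectorized pair back to token
# strings to make the _PAD / _GO / _EOS layout visible.
print([rev_shared_vocab[idx] for idx in X_train[0]])
print([rev_shared_vocab[idx] for idx in Y_train[0]])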
###Output
_____no_output_____
###Markdown
This looks good. In particular we can note:- the PAD=0 symbol at the beginning of the two sequences,- the input sequence has the GO=1 symbol to separate the source from the target,- the output sequence is a shifted version of the target and ends with EOS=2.Let's vectorize the validation and test set to be able to evaluate our models:
###Code
X_val, Y_val = vectorize_corpus(fr_val, num_val, shared_vocab,
word_level_target=False)
X_test, Y_test = vectorize_corpus(fr_test, num_test, shared_vocab,
word_level_target=False)
X_val.shape, Y_val.shape
X_test.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
A simple homogeneous Seq2Seq architectureTo keep the architecture simple we will use the **same RNN model and weights for both the encoder part** (before the `_GO` token) **and the decoder part** (after the `_GO` token).We may use a GRU recurrent cell instead of an LSTM because it is slightly faster to compute and should give comparable results.**Exercise:**- Build a Seq2Seq model: - Start with an Embedding layer; - Add a single GRU layer: the GRU layer should yield a sequence of output vectors, one at each timestep; - Add a Dense layer to adapt the output dimension of the GRU layer to the dimension of the output vocabulary; - Don't forget to insert some Dropout layer(s), especially after the Embedding layer.Note:- The output dimension of the Embedding layer should be smaller than usual because we have a small vocabulary size;- The dimension of the GRU should be larger to give the Seq2Seq model enough "working memory" to memorize the full input sequence before decoding it;- Your model should output a shape `[batch, sequence_length, vocab_size]`.
###Code
from keras.models import Sequential
from keras.layers import Embedding, Dropout, GRU, Dense
vocab_size = len(shared_vocab)
simple_seq2seq = Sequential()
# TODO
# Here we use the sparse_categorical_crossentropy loss to be able to pass
# integer-coded output for the token ids without having to convert to one-hot
# codes
simple_seq2seq.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# %load solutions/simple_seq2seq.py
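# A possible layer stack for the exercise above (just a sketch: the embedding
# size and GRU width below are illustrative choices, not the reference solution).
sketch_seq2seq = Sequential()
sketch_seq2seq.add(Embedding(vocab_size, 32, input_length=max_length))
sketch_seq2seq.add(Dropout(0.2))
sketch_seq2seq.add(GRU(256, return_sequences=True))
sketch_seq2seq.add(Dense(vocab_size, activation='softmax'))
sketch_seq2seq.compile(optimizer='adam', loss='sparse_categorical_crossentropy')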
###Output
_____no_output_____
###Markdown
Let's use a callback mechanism to automatically snapshot the best model found so far on the validation set:
###Code
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
best_model_fname = "simple_seq2seq_checkpoint.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_loss',
save_best_only=True, verbose=1)
###Output
_____no_output_____
###Markdown
We need to use the np.expand_dims trick on Y: this is required by Keras because we use a sparse (integer-based) representation for the output:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
history = simple_seq2seq.fit(X_train, np.expand_dims(Y_train, -1),
validation_data=(X_val, np.expand_dims(Y_val, -1)),
epochs=15, verbose=2, batch_size=32,
callbacks=[best_model_cb])
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], '--', label='validation')
plt.ylabel('negative log likelihood')
plt.xlabel('epoch')
plt.title('Convergence plot for Simple Seq2Seq')
###Output
_____no_output_____
###Markdown
Let's load the best model found on the validation set at the end of training:
###Code
simple_seq2seq = load_model(best_model_fname)
###Output
_____no_output_____
###Markdown
If you don't have access to a GPU and cannot wait 10 minutes for the model to converge to a reasonably good state, feel free to use the pretrained model. This model has been obtained by training the above model for ~150 epochs. The validation loss is significantly lower than 1e-5. In practice it should hardly ever make any prediction error on this easy translation problem.Alternatively we will load this imperfect model (trained for only 50 epochs) with a validation loss of ~7e-4. This model makes funny translation errors so I would suggest trying it first:
###Code
from keras.utils.data_utils import get_file
import os
get_file("simple_seq2seq_partially_pretrained.h5",
"https://github.com/m2dsupsdlclass/lectures-labs/releases/"
"download/0.4/simple_seq2seq_partially_pretrained.h5")
filename = os.path.expanduser(os.path.join('~',
'.keras/datasets/simple_seq2seq_partially_pretrained.h5'))
### Uncomment the following to replace for the fully trained network
#get_file("simple_seq2seq_pretrained.h5",
# "https://github.com/m2dsupsdlclass/lectures-labs/releases/"
# "download/0.4/simple_seq2seq_pretrained.h5")
#filename = os.path.expanduser(os.path.join('~',
# '.keras/datasets/simple_seq2seq_pretrained.h5'))
simple_seq2seq.load_weights(filename)
###Output
_____no_output_____
###Markdown
Let's have a look at a raw prediction on the first sample of the test set:
###Code
fr_test[0]
###Output
_____no_output_____
###Markdown
As a numeric array, this is provided (along with the expected target sequence) as the following padded input sequence:
###Code
first_test_sequence = X_test[0:1]
first_test_sequence
###Output
_____no_output_____
###Markdown
Remember that the `_GO` (symbol indexed at `1`) separates the reversed source from the expected target sequence:
###Code
rev_shared_vocab[1]
###Output
_____no_output_____
###Markdown
Interpreting the model prediction**Exercise**:- Feed this test sequence into the model. What is the shape of the output?- Get the argmax of each output prediction to get the most likely symbols- Dismiss the padding / end of sentence- Convert to readable vocabulary using `rev_shared_vocab`*Interpretation*- Compare the output with the first example in numerical format `num_test[0]`- What do you think of this way of decoding? Is it correct to use it at inference time?
###Code
# %load solutions/interpret_output.py
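# A sketch of the steps described above (the reference implementation is the
# solutions/interpret_output.py file loaded above): predict, take the argmax
# over the vocabulary axis, drop padding / end-of-sequence symbols, then map
# the ids back to strings.
prediction = simple_seq2seq.predict(first_test_sequence)
print("prediction shape:", prediction.shape)  # (1, max_length, vocab_size)
predicted_token_ids = prediction[0].argmax(-1)
special_ids = {shared_vocab[PAD], shared_vocab[EOS]}
predicted_tokens = [rev_shared_vocab[idx] for idx in predicted_token_ids
                    if idx not in special_ids]
print("predicted tokens:", predicted_tokens)
print("expected target: ", num_test[0])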
###Output
_____no_output_____
###Markdown
In the previous exercise we cheated a bit because we gave the complete sequence along with the solution in the input sequence. To correctly predict we need to predict one token at a time and reinject the predicted token in the input sequence to predict the next token:
###Code
def greedy_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True):
"""Greedy decoder recursively predicting one token at a time"""
# Initialize the list of input token ids with the source sequence
source_tokens = tokenize(source_sequence, word_level=word_level_source)
input_ids = [shared_vocab.get(t, UNK) for t in source_tokens[::-1]]
input_ids += [shared_vocab[GO]]
# Prepare a fixed size numpy array that matches the expected input
# shape for the model
input_array = np.empty(shape=(1, model.input_shape[1]),
dtype=np.int32)
decoded_tokens = []
while len(input_ids) <= max_length:
        # Vectorize the list of input tokens and use zero padding.
input_array.fill(shared_vocab[PAD])
input_array[0, -len(input_ids):] = input_ids
# Predict the next output: greedy decoding with argmax
next_token_id = model.predict(input_array)[0, -1].argmax()
# Stop decoding if the network predicts end of sentence:
if next_token_id == shared_vocab[EOS]:
break
# Otherwise use the reverse vocabulary to map the prediction
# back to the string space
decoded_tokens.append(rev_shared_vocab[next_token_id])
# Append prediction to input sequence to predict the next
input_ids.append(next_token_id)
separator = " " if word_level_target else ""
return separator.join(decoded_tokens)
phrases = [
"un",
"deux",
"trois",
"onze",
"quinze",
"cent trente deux",
"cent mille douze",
"sept mille huit cent cinquante neuf",
"vingt et un",
"vingt quatre",
"quatre vingts",
"quatre vingt onze mille",
"quatre vingt onze mille deux cent deux",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
###Output
_____no_output_____
###Markdown
Why is the partially trained network able to correctly give the output for`"sept mille huit cent cinquante neuf"`but not for:`"cent mille douze"`?The answer is the following:- it is rather easy for the network to learn a correspondence between symbols (first case), by dismissing `"cent"` and `"mille"`- outputting the right number of symbols, especially `0s` for `"cent mille douze"`, requires more reasoning and the ability to count.
###Code
phrases = [
"quatre vingt et un",
"quarante douze",
"onze cent soixante vingt quatorze",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
###Output
_____no_output_____
###Markdown
Model evaluationBecause **we expect only one correct translation** for a given source sequence, we can use **phrase-level accuracy** as a metric to quantify our model quality.Note that **this is not the case for real translation models** (e.g. from French to English on arbitrary sentences). Evaluation of a machine translation model is tricky in general. Automated evaluation can somehow be done at the corpus level with the [BLEU score](https://en.wikipedia.org/wiki/BLEU) (bilingual evaluation understudy) given a large enough sample of correct translations provided by certified translators, but it's only a noisy proxy.The only good evaluation is to give a large enough sample of the model predictions on some test sentences to certified translators and ask them to give an evaluation (e.g. a score between 0 and 6, 0 for nonsensical and 6 for the hypothetical perfect translation). However in practice this is very costly to do.Fortunately we can just use phrase-level accuracy on our very domain-specific toy problem:
###Code
def phrase_accuracy(model, num_sequences, fr_sequences, n_samples=300,
decoder_func=greedy_translate):
correct = []
n_samples = len(num_sequences) if n_samples is None else n_samples
for i, num_seq, fr_seq in zip(range(n_samples), num_sequences, fr_sequences):
if i % 100 == 0:
print("Decoding %d/%d" % (i, n_samples))
        predicted_seq = decoder_func(model, fr_seq,
shared_vocab, rev_shared_vocab,
word_level_target=False)
correct.append(num_seq == predicted_seq)
return np.mean(correct)
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test))
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train))
###Output
_____no_output_____
###Markdown
Bonus: Decoding with a Beam SearchInstead of decoding with a greedy strategy that only considers the most likely next token at each prediction, we can hold a priority queue of the most promising top-n sequences ordered by loglikelihoods.This could potentially improve the final accuracy of an imperfect model: indeed it can be the case that the most likely sequence (based on the conditional probability estimated by the model) starts with a character that is not the most likely alone.**Bonus Exercise:**- build a `beam_translate` function which decodes candidate translations with a beam search strategy- use a list of candidates, tracking `beam_size` candidates and their corresponding likelihoods- compute predictions for the next outputs by using predict with a batch of the size of the beam- be careful to stop appending results if EOS symbols have been found for each candidate!
###Code
def beam_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True,
beam_size=10, return_ll=False):
"""Decode candidate translations with a beam search strategy
If return_ll is False, only the best candidate string is returned.
If return_ll is True, all the candidate strings and their loglikelihoods
are returned.
"""
# %load solutions/beam_search.py
candidates = beam_translate(simple_seq2seq, "cent mille un",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
candidates = beam_translate(simple_seq2seq, "quatre vingts",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
###Output
_____no_output_____
###Markdown
Model Accuracy with Beam Search Decoding
###Code
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test,
decoder_func=beam_translate))
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train,
decoder_func=beam_translate))
###Output
_____no_output_____
###Markdown
Translation of Numeric Phrases with Seq2SeqIn the following we will try to build a **translation model from French phrases describing numbers** to the corresponding **numeric representation** (base 10).This is a toy machine translation task with a **restricted vocabulary** and a **single valid translation for each source phrase**, which makes it more tractable to train on a laptop computer and easier to evaluate. Despite those limitations we expect that this task will highlight interesting properties of Seq2Seq models, including:- the ability to **deal with different lengths** of the source and target sequences,- handling tokens with a **meaning that changes depending on the context** (e.g. "quatre" vs "quatre vingts" in "quatre cents"),- basic counting and "reasoning" capabilities of LSTM and GRU models.The parallel text data is generated from a "ground-truth" Python function named `to_french_phrase` that captures common rules. Hyphenation was intentionally omitted to make the phrases more ambiguous and therefore make the translation problem slightly harder to solve (and also because Olivier had no particular interest in properly implementing hyphenation rules :).
###Code
from french_numbers import to_french_phrase
for x in [21, 80, 81, 300, 213, 1100, 1201, 301000, 80080]:
print(str(x).rjust(6), to_french_phrase(x))
###Output
_____no_output_____
###Markdown
Generating a Training SetThe following will **generate 20,000 example phrases for numbers between 1 and 1,000,000** (excluded). We chose to over-represent small numbers by generating all the possible short sequences between `1` and `exhaustive=5000`.We then split the generated set into non-overlapping train, validation and test splits.
###Code
from french_numbers import generate_translations
from sklearn.model_selection import train_test_split
numbers, french_numbers = generate_translations(
low=1, high=int(1e6) - 1, exhaustive=5000, random_seed=0)
num_train, num_dev, fr_train, fr_dev = train_test_split(
numbers, french_numbers, test_size=0.5, random_state=0)
num_val, num_test, fr_val, fr_test = train_test_split(
num_dev, fr_dev, test_size=0.5, random_state=0)
len(fr_train), len(fr_val), len(fr_test)
for i, fr_phrase, num_phrase in zip(range(5), fr_train, num_train):
print(num_phrase.rjust(6), fr_phrase)
for i, fr_phrase, num_phrase in zip(range(5), fr_val, num_val):
print(num_phrase.rjust(6), fr_phrase)
###Output
_____no_output_____
###Markdown
VocabulariesBuild the vocabularies from the training set only to get a chance to have some out-of-vocabulary words in the validation and test sets.First we need to introduce specific symbols that will be used to:- pad sequences- mark the beginning of translation- mark the end of translation- be used as a placeholder for out-of-vocabulary symbols (not seen in the training set).Here we use the same convention as the [tensorflow seq2seq tutorial](https://www.tensorflow.org/tutorials/seq2seq):
###Code
PAD, GO, EOS, UNK = START_VOCAB = ['_PAD', '_GO', '_EOS', '_UNK']
###Output
_____no_output_____
###Markdown
To build the vocabulary we need to tokenize the sequences of symbols. For the digital number representation we use character level tokenization while whitespace-based word level tokenization will do for the French phrases:
###Code
def tokenize(sentence, word_level=True):
if word_level:
return sentence.split()
else:
return [sentence[i:i + 1] for i in range(len(sentence))]
tokenize('1234', word_level=False)
tokenize('mille deux cent trente quatre', word_level=True)
###Output
_____no_output_____
###Markdown
Let's now use this tokenization strategy to assign a unique integer token id to each possible token string found in the training set in each language ('French' and 'numeric'):
###Code
def build_vocabulary(tokenized_sequences):
rev_vocabulary = START_VOCAB[:]
unique_tokens = set()
for tokens in tokenized_sequences:
unique_tokens.update(tokens)
rev_vocabulary += sorted(unique_tokens)
vocabulary = {}
for i, token in enumerate(rev_vocabulary):
vocabulary[token] = i
return vocabulary, rev_vocabulary
tokenized_fr_train = [tokenize(s, word_level=True) for s in fr_train]
tokenized_num_train = [tokenize(s, word_level=False) for s in num_train]
fr_vocab, rev_fr_vocab = build_vocabulary(tokenized_fr_train)
num_vocab, rev_num_vocab = build_vocabulary(tokenized_num_train)
###Output
_____no_output_____
###Markdown
The two languages do not have the same vocabulary sizes:
###Code
len(fr_vocab)
len(num_vocab)
for k, v in sorted(fr_vocab.items())[:10]:
print(k.rjust(10), v)
print('...')
for k, v in sorted(num_vocab.items()):
print(k.rjust(10), v)
###Output
_____no_output_____
###Markdown
We also built the reverse mappings from token ids to token string representations:
###Code
print(rev_fr_vocab)
print(rev_num_vocab)
###Output
_____no_output_____
###Markdown
Seq2Seq with a single GRU architectureFrom: [Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." NIPS 2014](https://arxiv.org/abs/1409.3215)For a given source sequence - target sequence pair, we will:- tokenize the source and target sequences;- reverse the order of the source sequence;- build the input sequence by concatenating the reversed source sequence and the target sequence in original order using the `_GO` token as a delimiter, - build the output sequence by appending the `_EOS` token to the target sequence.Let's do this as a function using the original string representations for the tokens so as to make it easier to debug: **Exercise**- Write a function that turns a pair of tokenized (source, target) sequences into a pair of (input, output) sequences as described above.- The function should have a `reverse_source=True` option.Notes: - The function should output two sequences of string tokens: one to be fed as the input and the other as expected output for the seq2seq network.- Do not pad the sequences: we will handle the padding later.- Don't forget to insert the `_GO` and `_EOS` special symbols at the right locations.
###Code
def make_input_output(source_tokens, target_tokens, reverse_source=True):
    # TODO: build input_tokens and output_tokens as described above
return input_tokens, output_tokens
# %load solutions/make_input_output.py
input_tokens, output_tokens = make_input_output(
['cent', 'vingt', 'et', 'un'],
['1', '2', '1'],
)
# Expected outputs:
# ['un', 'et', 'vingt', 'cent', '_GO', '1', '2', '1']
# ['1', '2', '1', '_EOS']
input_tokens
output_tokens
###Output
_____no_output_____
###Markdown
Vectorization of the parallel corpusLet's apply the previous transformation to each pair of (source, target) sequences and use a shared vocabulary to store the results in numpy arrays of integer token ids, with padding on the left so that all input / output sequences have the same length:
###Code
all_tokenized_sequences = tokenized_fr_train + tokenized_num_train
shared_vocab, rev_shared_vocab = build_vocabulary(all_tokenized_sequences)
max(len(s) for s in tokenized_fr_train)
max(len(s) for s in tokenized_num_train)
import numpy as np
max_length = 20 # found by introspection of our training set
def vectorize_corpus(source_sequences, target_sequences, shared_vocab,
word_level_source=True, word_level_target=True,
max_length=max_length):
assert len(source_sequences) == len(target_sequences)
n_sequences = len(source_sequences)
source_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
source_ids.fill(shared_vocab[PAD])
target_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
target_ids.fill(shared_vocab[PAD])
numbered_pairs = zip(range(n_sequences), source_sequences, target_sequences)
for i, source_seq, target_seq in numbered_pairs:
source_tokens = tokenize(source_seq, word_level=word_level_source)
target_tokens = tokenize(target_seq, word_level=word_level_target)
in_tokens, out_tokens = make_input_output(source_tokens, target_tokens)
in_token_ids = [shared_vocab.get(t, UNK) for t in in_tokens]
source_ids[i, -len(in_token_ids):] = in_token_ids
out_token_ids = [shared_vocab.get(t, UNK) for t in out_tokens]
target_ids[i, -len(out_token_ids):] = out_token_ids
return source_ids, target_ids
X_train, Y_train = vectorize_corpus(fr_train, num_train, shared_vocab,
word_level_target=False)
X_train.shape
X_train[0]
Y_train.shape
fr_train[0]
num_train[0]
X_train[0]
Y_train[0]
###Output
_____no_output_____
###Markdown
This looks good. In particular we can note:- the PAD=0 symbol at the beginning of the two sequences,- the input sequence has the GO=1 symbol to separate the source from the target,- the output sequence is a shifted version of the target and ends with EOS=2.Let's vectorize the validation and test set to be able to evaluate our models:
###Code
X_val, Y_val = vectorize_corpus(fr_val, num_val, shared_vocab,
word_level_target=False)
X_test, Y_test = vectorize_corpus(fr_test, num_test, shared_vocab,
word_level_target=False)
X_val.shape, Y_val.shape
X_test.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
A simple homogeneous Seq2Seq architectureTo keep the architecture simple we will use the **same RNN model and weights for both the encoder part** (before the `_GO` token) **and the decoder part** (after the `_GO` token).We may use a GRU recurrent cell instead of an LSTM because it is slightly faster to compute and should give comparable results.**Exercise:**- Build a Seq2Seq model: - Start with an Embedding layer; - Add a single GRU layer: the GRU layer should yield a sequence of output vectors, one at each timestep; - Add a Dense layer to adapt the output dimension of the GRU layer to the dimension of the output vocabulary; - Don't forget to insert some Dropout layer(s), especially after the Embedding layer.Note:- The output dimension of the Embedding layer should be smaller than usual because we have a small vocabulary size;- The dimension of the GRU should be larger to give the Seq2Seq model enough "working memory" to memorize the full input sequence before decoding it;- Your model should output a shape `[batch, sequence_length, vocab_size]`.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Dropout, GRU, Dense
vocab_size = len(shared_vocab)
simple_seq2seq = Sequential()
# TODO
# Here we use the sparse_categorical_crossentropy loss to be able to pass
# integer-coded output for the token ids without having to convert to one-hot
# codes
simple_seq2seq.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# %load solutions/simple_seq2seq.py
###Output
_____no_output_____
###Markdown
**Questions**:- What is the expected shape of the output of the model when fed with input of length 20 tokens? What is the meaning of the values in the output of the model?- What is the shape of the output of each layer in the model?
###Code
# simple_seq2seq.predict(X_train[0:1]).shape
# simple_seq2seq.summary()
###Output
_____no_output_____
###Markdown
Let's register a callback mechanism to automatically snapshot the best model by measuring the performance of the model on the validation set at the end of each epoch during training:
###Code
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model
best_model_fname = "simple_seq2seq_checkpoint.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_loss',
save_best_only=True, verbose=1)
###Output
_____no_output_____
###Markdown
We need to use the np.expand_dims trick on Y: this is required by Keras because we use a sparse (integer-based) representation for the output:
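As a quick illustrative check, the trailing singleton dimension added by `np.expand_dims` can be seen from the shapes:
###Code
# The sparse integer targets gain a trailing singleton dimension:
print(Y_train.shape)                      # (n_sequences, max_length)
print(np.expand_dims(Y_train, -1).shape)  # (n_sequences, max_length, 1)
###Output
_____no_output_____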
###Code
%matplotlib inline
import matplotlib.pyplot as plt
history = simple_seq2seq.fit(X_train, np.expand_dims(Y_train, -1),
validation_data=(X_val, np.expand_dims(Y_val, -1)),
epochs=15, verbose=2, batch_size=32,
callbacks=[best_model_cb])
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], '--', label='validation')
plt.ylabel('negative log likelihood')
plt.xlabel('epoch')
plt.legend()
plt.title('Convergence plot for Simple Seq2Seq')
###Output
_____no_output_____
###Markdown
Let's load the best model found on the validation set at the end of training:
###Code
simple_seq2seq = load_model(best_model_fname)
###Output
_____no_output_____
###Markdown
If you don't have access to a GPU and cannot wait 10 minutes for the model to converge to a reasonably good state, feel free to use the pretrained model. This model has been obtained by training the above model for ~150 epochs. The validation loss is significantly lower than 1e-5. In practice it should hardly ever make any prediction error on this easy translation problem.Alternatively we will load this imperfect model (trained for only 50 epochs) with a validation loss of ~7e-4. This model makes funny translation errors so I would suggest trying it first:
###Code
from tensorflow.keras.utils import get_file
filename = get_file(
"simple_seq2seq_partially_pretrained.h5",
"https://github.com/m2dsupsdlclass/lectures-labs/releases/"
"download/0.4/simple_seq2seq_partially_pretrained.h5"
)
# Uncomment the following to replace for the fully trained network:
# filename= get_file(
# "simple_seq2seq_pretrained.h5",
# "https://github.com/m2dsupsdlclass/lectures-labs/releases/"
# "download/0.4/simple_seq2seq_pretrained.h5")
simple_seq2seq.load_weights(filename)
###Output
_____no_output_____
###Markdown
Let's have a look at a raw prediction on the first sample of the test set:
###Code
fr_test[0]
###Output
_____no_output_____
###Markdown
As a numeric array, this is provided (along with the expected target sequence) as the following padded input sequence:
###Code
first_test_sequence = X_test[0:1]
first_test_sequence
###Output
_____no_output_____
###Markdown
Remember that the `_GO` (symbol indexed at `1`) separates the reversed source from the expected target sequence:
###Code
rev_shared_vocab[1]
###Output
_____no_output_____
###Markdown
Interpreting the model prediction**Exercise**:- Feed this test sequence into the model. What is the shape of the output?- Get the argmax of each output prediction to get the most likely symbols- Dismiss the padding / end of sentence- Convert to readable vocabulary using `rev_shared_vocab`*Interpretation*- Compare the output with the first example in numerical format `num_test[0]`- What do you think of this way of decoding? Is it correct to use it at inference time?
###Code
# %load solutions/interpret_output.py
###Output
_____no_output_____
###Markdown
In the previous exercise we cheated because we gave the complete sequence along with the solution in the input sequence.To be more realistic we need to use the model in a setting where we do not provide any token of the expected translation as part of the input sequence: the model shall predict one token at a time starting only from the source sequence along with the `_GO` special symbol. At each step, we append the newly predicted output token to the input sequence to predict the next token:
###Code
def greedy_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True):
"""Greedy decoder recursively predicting one token at a time"""
# Initialize the list of input token ids with the source sequence
source_tokens = tokenize(source_sequence, word_level=word_level_source)
input_ids = [shared_vocab.get(t, UNK) for t in reversed(source_tokens)]
input_ids += [shared_vocab[GO]]
# Prepare a fixed size numpy array that matches the expected input
# shape for the model
input_array = np.empty(shape=(1, model.input_shape[1]),
dtype=np.int32)
decoded_tokens = []
while len(input_ids) <= max_length:
        # Vectorize the list of input tokens and use zero padding.
input_array.fill(shared_vocab[PAD])
input_array[0, -len(input_ids):] = input_ids
# Predict the next output: greedy decoding with argmax
next_token_id = model(input_array)[0, -1].numpy().argmax()
# Stop decoding if the network predicts end of sentence:
if next_token_id == shared_vocab[EOS]:
break
# Otherwise use the reverse vocabulary to map the prediction
# back to the string space
decoded_tokens.append(rev_shared_vocab[next_token_id])
# Append prediction to input sequence to predict the next
input_ids.append(next_token_id)
separator = " " if word_level_target else ""
return separator.join(decoded_tokens)
phrases = [
"un",
"deux",
"trois",
"onze",
"quinze",
"cent trente deux",
"cent mille douze",
"sept mille huit cent cinquante neuf",
"vingt et un",
"vingt quatre",
"quatre vingts",
"quatre vingt onze mille",
"quatre vingt onze mille deux cent deux",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
###Output
_____no_output_____
###Markdown
The results are far from perfect but we can see that the network has already picked up some translation skills.Why is the partially trained network able to give the correct translation for:`"sept mille huit cent cinquante neuf"`but not for:`"cent mille douze"`?The answer is the following:- it is rather easy for the network to learn a mapping between symbols (first case), by dismissing `"cent"` and `"mille"`;- outputting the right number of symbols, especially `0s` for `"cent mille douze"`, requires more reasoning and the ability to count. Let's have a look at generalization outside of correct French and see whether the network generalizes the way a French speaker would:
###Code
phrases = [
"quatre vingt et un",
"quarante douze",
"onze mille soixante vingt sept",
"deux mille soixante vingt quatorze",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
###Output
_____no_output_____
###Markdown
Model evaluationBecause **we expect only one correct translation** for a given source sequence, we can use **phrase-level accuracy** as a metric to quantify our model quality.Note that **this is not the case for real translation models** (e.g. from French to English on arbitrary sentences). Evaluation of a machine translation model is tricky in general. Automated evaluation can somehow be done at the corpus level with the [BLEU score](https://en.wikipedia.org/wiki/BLEU) (bilingual evaluation understudy) given a large enough sample of correct translations provided by certified translators, but it's only a noisy proxy.The only good evaluation is to give a large enough sample of the model predictions on some test sentences to certified translators and ask them to give an evaluation (e.g. a score between 0 and 6, 0 for nonsensical and 6 for the hypothetical perfect translation). However in practice this is very costly to do.Fortunately we can just use phrase-level accuracy on our very domain-specific toy problem:
###Code
def phrase_accuracy(model, num_sequences, fr_sequences, n_samples=300,
decoder_func=greedy_translate):
correct = []
n_samples = len(num_sequences) if n_samples is None else n_samples
for i, num_seq, fr_seq in zip(range(n_samples), num_sequences, fr_sequences):
if i % 100 == 0:
print("Decoding %d/%d" % (i, n_samples))
        predicted_seq = decoder_func(model, fr_seq,
shared_vocab, rev_shared_vocab,
word_level_target=False)
correct.append(num_seq == predicted_seq)
return np.mean(correct)
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train))
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test))
###Output
_____no_output_____
###Markdown
Bonus: Decoding with a Beam SearchInstead of decoding with a greedy strategy that only considers the most likely next token at each prediction, we can hold a priority queue of the most promising top-n sequences ordered by loglikelihoods.This could potentially improve the final accuracy of an imperfect model: indeed it can be the case that the most likely sequence (based on the conditional probability estimated by the model) starts with a character that is not the most likely alone.**Bonus Exercise:**- build a `beam_translate` function which decodes candidate translations with a beam search strategy- use a list of candidates, tracking `beam_size` candidates and their corresponding likelihoods- compute predictions for the next outputs by using predict with a batch of the size of the beam- be careful to stop appending results if EOS symbols have been found for each candidate!
###Code
def beam_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True,
beam_size=10, return_ll=False):
"""Decode candidate translations with a beam search strategy
If return_ll is False, only the best candidate string is returned.
If return_ll is True, all the candidate strings and their loglikelihoods
are returned.
"""
# %load solutions/beam_search.py
def beam_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True,
beam_size=10, return_ll=False):
"""Decode candidate translations with a beam search strategy
If return_ll is False, only the best candidate string is returned.
If return_ll is True, all the candidate strings and their loglikelihoods
are returned.
"""
# Initialize the list of input token ids with the source sequence
source_tokens = tokenize(source_sequence, word_level=word_level_source)
input_ids = [shared_vocab.get(t, UNK) for t in source_tokens[::-1]]
input_ids += [shared_vocab[GO]]
# initialize loglikelihood, input token ids, decoded tokens for
# each candidate in the beam
candidates = [(0, input_ids[:], [], False)]
# Prepare a fixed size numpy array that matches the expected input
# shape for the model
input_array = np.empty(shape=(beam_size, model.input_shape[1]),
dtype=np.int32)
while any([not done and (len(input_ids) < max_length)
for _, input_ids, _, done in candidates]):
        # Vectorize the list of input tokens and use zero padding.
input_array.fill(shared_vocab[PAD])
for i, (_, input_ids, _, done) in enumerate(candidates):
if not done:
input_array[i, -len(input_ids):] = input_ids
# Predict the next output in a single call to the model to amortize
# the overhead and benefit from vector data parallelism on GPU.
next_likelihood_batch = model(input_array).numpy()
# Build the new candidates list by summing the loglikelood of the
# next token with their parents for each new possible expansion.
new_candidates = []
for i, (ll, input_ids, decoded, done) in enumerate(candidates):
if done:
new_candidates.append((ll, input_ids, decoded, done))
else:
next_loglikelihoods = np.log(next_likelihood_batch[i, -1])
for next_token_id, next_ll in enumerate(next_loglikelihoods):
new_ll = ll + next_ll
new_input_ids = input_ids[:]
new_input_ids.append(next_token_id)
new_decoded = decoded[:]
new_done = done
if next_token_id == shared_vocab[EOS]:
new_done = True
if not new_done:
new_decoded.append(rev_shared_vocab[next_token_id])
new_candidates.append(
(new_ll, new_input_ids, new_decoded, new_done))
# Only keep a beam of the most promising candidates
new_candidates.sort(reverse=True)
candidates = new_candidates[:beam_size]
separator = " " if word_level_target else ""
if return_ll:
return [(separator.join(decoded), ll) for ll, _, decoded, _ in candidates]
else:
_, _, decoded, done = candidates[0]
return separator.join(decoded)
candidates = beam_translate(simple_seq2seq, "cent mille un",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
candidates = beam_translate(simple_seq2seq, "quatre vingts",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
###Output
_____no_output_____
###Markdown
Model Accuracy with Beam Search Decoding
###Code
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test,
decoder_func=beam_translate))
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train,
decoder_func=beam_translate))
###Output
_____no_output_____
###Markdown
Translation of Numeric Phrases with Seq2SeqIn the following we will try to build a **translation model from French phrases describing numbers** to the corresponding **numeric representation** (base 10).This is a toy machine translation task with a **restricted vocabulary** and a **single valid translation for each source phrase**, which makes it more tractable to train on a laptop computer and easier to evaluate. Despite those limitations we expect that this task will highlight interesting properties of Seq2Seq models including:- the ability to **deal with different lengths** of the source and target sequences,- handling tokens with a **meaning that changes depending on the context** (e.g. "quatre" vs "quatre vingts" in "quatre cents"),- basic counting and "reasoning" capabilities of LSTM and GRU models.The parallel text data is generated from a "ground-truth" Python function named `to_french_phrase` that captures common rules. Hyphenation was intentionally omitted to make the phrases more ambiguous and therefore make the translation problem slightly harder to solve (and also because Olivier had no particular interest in properly implementing hyphenation rules :).
###Code
from french_numbers import to_french_phrase
for x in [21, 80, 81, 300, 213, 1100, 1201, 301000, 80080]:
print(str(x).rjust(6), to_french_phrase(x))
###Output
_____no_output_____
###Markdown
Generating a Training SetThe following will **generate 20,000 example phrases for numbers between 1 and 1,000,000** (excluded). We chose to over-represent small numbers by generating all the possible short sequences between `1` and `exhaustive=5000`.We then split the generated set into non-overlapping train, validation and test splits.
###Code
from french_numbers import generate_translations
from sklearn.model_selection import train_test_split
numbers, french_numbers = generate_translations(
low=1, high=int(1e6) - 1, exhaustive=5000, random_seed=0)
num_train, num_dev, fr_train, fr_dev = train_test_split(
numbers, french_numbers, test_size=0.5, random_state=0)
num_val, num_test, fr_val, fr_test = train_test_split(
num_dev, fr_dev, test_size=0.5, random_state=0)
len(fr_train), len(fr_val), len(fr_test)
for i, fr_phrase, num_phrase in zip(range(5), fr_train, num_train):
print(num_phrase.rjust(6), fr_phrase)
for i, fr_phrase, num_phrase in zip(range(5), fr_val, num_val):
print(num_phrase.rjust(6), fr_phrase)
###Output
_____no_output_____
###Markdown
VocabulariesBuild the vocabularies from the training set only, so that we get a chance to have some out-of-vocabulary words in the validation and test sets.First we need to introduce specific symbols that will be used to:- pad sequences- mark the beginning of translation- mark the end of translation- be used as a placeholder for out-of-vocabulary symbols (not seen in the training set).Here we use the same convention as the [tensorflow seq2seq tutorial](https://www.tensorflow.org/tutorials/seq2seq):
###Code
PAD, GO, EOS, UNK = START_VOCAB = ['_PAD', '_GO', '_EOS', '_UNK']
###Output
_____no_output_____
###Markdown
To build the vocabulary we need to tokenize the sequences of symbols. For the digital number representation we use character level tokenization while whitespace-based word level tokenization will do for the French phrases:
###Code
def tokenize(sentence, word_level=True):
if word_level:
return sentence.split()
else:
return [sentence[i:i + 1] for i in range(len(sentence))]
tokenize('1234', word_level=False)
tokenize('mille deux cent trente quatre', word_level=True)
###Output
_____no_output_____
###Markdown
Let's now use this tokenization strategy to assign a unique integer token id to each possible token string found in the training set of each language ('French' and 'numeric'):
###Code
def build_vocabulary(tokenized_sequences):
rev_vocabulary = START_VOCAB[:]
unique_tokens = set()
for tokens in tokenized_sequences:
unique_tokens.update(tokens)
rev_vocabulary += sorted(unique_tokens)
vocabulary = {}
for i, token in enumerate(rev_vocabulary):
vocabulary[token] = i
return vocabulary, rev_vocabulary
tokenized_fr_train = [tokenize(s, word_level=True) for s in fr_train]
tokenized_num_train = [tokenize(s, word_level=False) for s in num_train]
fr_vocab, rev_fr_vocab = build_vocabulary(tokenized_fr_train)
num_vocab, rev_num_vocab = build_vocabulary(tokenized_num_train)
###Output
_____no_output_____
###Markdown
The two languages do not have the same vocabulary sizes:
###Code
len(fr_vocab)
len(num_vocab)
for k, v in sorted(fr_vocab.items())[:10]:
print(k.rjust(10), v)
print('...')
for k, v in sorted(num_vocab.items()):
print(k.rjust(10), v)
###Output
_____no_output_____
###Markdown
We also built the reverse mappings from token ids to token string representations:
###Code
print(rev_fr_vocab)
print(rev_num_vocab)
###Output
_____no_output_____
###Markdown
Seq2Seq with a single GRU architectureFrom: [Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." NIPS 2014](https://arxiv.org/abs/1409.3215)For a given source sequence - target sequence pair, we will:- tokenize the source and target sequences;- reverse the order of the source sequence;- build the input sequence by concatenating the reversed source sequence and the target sequence in original order, using the `_GO` token as a delimiter,- build the output sequence by appending the `_EOS` token to the target sequence.Let's do this as a function using the original string representations for the tokens so as to make it easier to debug: **Exercise**- Write a function that turns a pair of tokenized (source, target) sequences into a pair of (input, output) sequences as described above.- The function should have `reverse_source=True` as an option.Notes: - The function should output two sequences of string tokens: one to be fed as the input and the other as the expected output for the seq2seq network.- Do not pad the sequences: we will handle the padding later.- Don't forget to insert the `_GO` and `_EOS` special symbols at the right locations.
###Code
def make_input_output(source_tokens, target_tokens, reverse_source=True):
    # TODO
return input_tokens, output_tokens
# %load solutions/make_input_output.py
input_tokens, output_tokens = make_input_output(
['cent', 'vingt', 'et', 'un'],
['1', '2', '1'],
)
# Expected outputs:
# ['un', 'et', 'vingt', 'cent', '_GO', '1', '2', '1']
# ['1', '2', '1', '_EOS']
input_tokens
output_tokens
###Output
_____no_output_____
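###Markdown
A minimal sketch of one possible implementation, in case you want to compare against your own attempt before loading the official solution. It is given a different name so it does not overwrite the exercise function, and it relies on the `GO` and `EOS` constants defined earlier:
###Code
def make_input_output_sketch(source_tokens, target_tokens, reverse_source=True):
    """Hedged sketch: reversed source + _GO + target as input, target + _EOS as output."""
    if reverse_source:
        source_tokens = source_tokens[::-1]
    input_tokens = source_tokens + [GO] + target_tokens
    output_tokens = target_tokens + [EOS]
    return input_tokens, output_tokens
make_input_output_sketch(['cent', 'vingt', 'et', 'un'], ['1', '2', '1'])
###Output
_____no_output_____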
###Markdown
Vectorization of the parallel corpusLet's apply the previous transformation to each (source, target) pair of sequences and use a shared vocabulary to store the results in numpy arrays of integer token ids, with padding on the left so that all input / output sequences have the same length:
###Code
all_tokenized_sequences = tokenized_fr_train + tokenized_num_train
shared_vocab, rev_shared_vocab = build_vocabulary(all_tokenized_sequences)
max(len(s) for s in tokenized_fr_train)
max(len(s) for s in tokenized_num_train)
import numpy as np
max_length = 20 # found by introspection of our training set
def vectorize_corpus(source_sequences, target_sequences, shared_vocab,
word_level_source=True, word_level_target=True,
max_length=max_length):
assert len(source_sequences) == len(target_sequences)
n_sequences = len(source_sequences)
source_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
source_ids.fill(shared_vocab[PAD])
target_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
target_ids.fill(shared_vocab[PAD])
numbered_pairs = zip(range(n_sequences), source_sequences, target_sequences)
for i, source_seq, target_seq in numbered_pairs:
source_tokens = tokenize(source_seq, word_level=word_level_source)
target_tokens = tokenize(target_seq, word_level=word_level_target)
in_tokens, out_tokens = make_input_output(source_tokens, target_tokens)
        in_token_ids = [shared_vocab.get(t, shared_vocab[UNK]) for t in in_tokens]
        source_ids[i, -len(in_token_ids):] = in_token_ids
        out_token_ids = [shared_vocab.get(t, shared_vocab[UNK]) for t in out_tokens]
target_ids[i, -len(out_token_ids):] = out_token_ids
return source_ids, target_ids
X_train, Y_train = vectorize_corpus(fr_train, num_train, shared_vocab,
word_level_target=False)
X_train.shape
X_train[0]
Y_train.shape
fr_train[0]
num_train[0]
X_train[0]
Y_train[0]
###Output
_____no_output_____
###Markdown
This looks good. In particular we can note:- the PAD=0 symbol at the beginning of the two sequences,- the input sequence has the GO=1 symbol to separate the source from the target,- the output sequence is a shifted version of the target and ends with EOS=2.Let's vectorize the validation and test set to be able to evaluate our models:
###Code
X_val, Y_val = vectorize_corpus(fr_val, num_val, shared_vocab,
word_level_target=False)
X_test, Y_test = vectorize_corpus(fr_test, num_test, shared_vocab,
word_level_target=False)
X_val.shape, Y_val.shape
X_test.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
A simple homogeneous Seq2Seq architectureTo keep the architecture simple we will use the **same RNN model and weights for both the encoder part** (before the `_GO` token) **and the decoder part** (after the `_GO` token).We may use a GRU recurrent cell instead of an LSTM because it is slightly faster to compute and should give comparable results.**Exercise:**- Build a Seq2Seq model: - Start with an Embedding layer; - Add a single GRU layer: the GRU layer should yield a sequence of output vectors, one at each timestep; - Add a Dense layer to adapt the output dimension of the GRU layer to the dimension of the output vocabulary; - Don't forget to insert some Dropout layer(s), especially after the Embedding layer.Note:- The output dimension of the Embedding layer should be smaller than usual because we have a small vocabulary size;- The dimension of the GRU should be larger to give the Seq2Seq model enough "working memory" to memorize the full input sequence before decoding it;- Your model should output a shape `[batch, sequence_length, vocab_size]`.
###Code
from keras.models import Sequential
from keras.layers import Embedding, Dropout, GRU, Dense
vocab_size = len(shared_vocab)
simple_seq2seq = Sequential()
# TODO
# Here we use the sparse_categorical_crossentropy loss to be able to pass
# integer-coded output for the token ids without having to convert to one-hot
# codes
simple_seq2seq.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# %load solutions/simple_seq2seq.py
###Output
_____no_output_____
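###Markdown
As a hedged sketch only (the layer sizes are illustrative guesses, not the loaded solution), the architecture described above could look roughly like the following. A separate variable name is used so that `simple_seq2seq` is not overwritten:
###Code
# Illustrative sketch: Embedding -> Dropout -> GRU (full sequence) -> softmax over the vocabulary
sketch_seq2seq = Sequential()
sketch_seq2seq.add(Embedding(vocab_size, 32, input_length=max_length))
sketch_seq2seq.add(Dropout(0.2))
sketch_seq2seq.add(GRU(256, return_sequences=True))
sketch_seq2seq.add(Dense(vocab_size, activation='softmax'))
sketch_seq2seq.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
sketch_seq2seq.output_shape  # inspect the output shape of the sketch
###Output
_____no_output_____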
###Markdown
**Questions**:- What is the expected shape of the output of the model when fed with input of length 20 tokens? What is the meaning of the values in the output of the model?- What is the shape of the output of each layer in the model?
###Code
# simple_seq2seq.predict(X_train[0:1]).shape
# simple_seq2seq.summary()
###Output
_____no_output_____
###Markdown
Let's register a callback mechanism to automatically snapshot the best model by measuring the performance of the model on the validation set at the end of each epoch during training:
###Code
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
best_model_fname = "simple_seq2seq_checkpoint.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_loss',
save_best_only=True, verbose=1)
###Output
_____no_output_____
###Markdown
We need to use the np.expand_dims trick on Y: this is required by Keras because we use a sparse (integer-based) representation for the output:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
history = simple_seq2seq.fit(X_train, np.expand_dims(Y_train, -1),
validation_data=(X_val, np.expand_dims(Y_val, -1)),
epochs=15, verbose=2, batch_size=32,
callbacks=[best_model_cb])
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], '--', label='validation')
plt.ylabel('negative log likelihood')
plt.xlabel('epoch')
plt.legend()
plt.title('Convergence plot for Simple Seq2Seq')
###Output
_____no_output_____
###Markdown
Let's load the best model found on the validation set at the end of training:
###Code
simple_seq2seq = load_model(best_model_fname)
###Output
_____no_output_____
###Markdown
If you don't have access to a GPU and cannot wait 10 minutes for the model to converge to a reasonably good state, feel free to use the pretrained model. This model has been obtained by training the above model for ~150 epochs. The validation loss is significantly lower than 1e-5. In practice it should hardly ever make any prediction error on this easy translation problem.Alternatively we will load this imperfect model (trained for only 50 epochs) with a validation loss of ~7e-4. This model makes funny translation errors, so I would suggest trying it first:
###Code
from keras.utils.data_utils import get_file
filename = get_file(
"simple_seq2seq_partially_pretrained.h5",
"https://github.com/m2dsupsdlclass/lectures-labs/releases/"
"download/0.4/simple_seq2seq_partially_pretrained.h5"
)
# Uncomment the following to replace for the fully trained network:
# filename= get_file(
# "simple_seq2seq_pretrained.h5",
# "https://github.com/m2dsupsdlclass/lectures-labs/releases/"
# "download/0.4/simple_seq2seq_pretrained.h5")
simple_seq2seq.load_weights(filename)
###Output
_____no_output_____
###Markdown
Let's have a look at a raw prediction on the first sample of the test set:
###Code
fr_test[0]
###Output
_____no_output_____
###Markdown
As a numeric array, this is provided (along with the expected target sequence) as the following padded input sequence:
###Code
first_test_sequence = X_test[0:1]
first_test_sequence
###Output
_____no_output_____
###Markdown
Remember that the `_GO` (symbol indexed at `1`) separates the reversed source from the expected target sequence:
###Code
rev_shared_vocab[1]
###Output
_____no_output_____
###Markdown
Interpreting the model prediction**Exercise**:- Feed this test sequence into the model. What is the shape of the output?- Get the argmax of each output prediction to get the most likely symbols- Dismiss the padding / end of sentence- Convert to readable vocabulary using `rev_shared_vocab`*Interpretation*- Compare the output with the first example in numerical format `num_test[0]`- What do you think of this way of decoding? Is it correct to use it at inference time?
###Code
# %load solutions/interpret_output.py
###Output
_____no_output_____
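###Markdown
A possible sketch of the steps described above (kept separate from the loaded solution):
###Code
# Hedged sketch: raw prediction, greedy argmax per timestep, then mapping ids back to strings
prediction = simple_seq2seq.predict(first_test_sequence)
print("output shape:", prediction.shape)  # one probability distribution over the vocabulary per timestep
predicted_ids = prediction[0].argmax(axis=-1)
special_ids = {shared_vocab[PAD], shared_vocab[GO], shared_vocab[EOS]}
print("decoded:", "".join(rev_shared_vocab[i] for i in predicted_ids if i not in special_ids))
print("expected:", num_test[0])
###Output
_____no_output_____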
###Markdown
In the previous exercise we cheated because we gave the complete sequence along with the solution in the input sequence. To be more realistic we need to use the model in a setting where we do not provide any token of the expected translation as part of the input sequence: the model shall predict one token at a time, starting only from the source sequence along with the `_GO` special symbol. At each step, we append the newly predicted output token to the input sequence to predict the next token:
###Code
def greedy_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True):
"""Greedy decoder recursively predicting one token at a time"""
# Initialize the list of input token ids with the source sequence
source_tokens = tokenize(source_sequence, word_level=word_level_source)
    input_ids = [shared_vocab.get(t, shared_vocab[UNK]) for t in reversed(source_tokens)]
input_ids += [shared_vocab[GO]]
# Prepare a fixed size numpy array that matches the expected input
# shape for the model
input_array = np.empty(shape=(1, model.input_shape[1]),
dtype=np.int32)
decoded_tokens = []
while len(input_ids) <= max_length:
        # Vectorize the list of input tokens
        # and use zero padding.
input_array.fill(shared_vocab[PAD])
input_array[0, -len(input_ids):] = input_ids
# Predict the next output: greedy decoding with argmax
next_token_id = model.predict(input_array)[0, -1].argmax()
# Stop decoding if the network predicts end of sentence:
if next_token_id == shared_vocab[EOS]:
break
# Otherwise use the reverse vocabulary to map the prediction
# back to the string space
decoded_tokens.append(rev_shared_vocab[next_token_id])
# Append prediction to input sequence to predict the next
input_ids.append(next_token_id)
separator = " " if word_level_target else ""
return separator.join(decoded_tokens)
phrases = [
"un",
"deux",
"trois",
"onze",
"quinze",
"cent trente deux",
"cent mille douze",
"sept mille huit cent cinquante neuf",
"vingt et un",
"vingt quatre",
"quatre vingts",
"quatre vingt onze mille",
"quatre vingt onze mille deux cent deux",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
###Output
_____no_output_____
###Markdown
The results are far from perfect but we can see that the network has already picked up some translation skills.Why is the partially trained network able to give the correct translation for:`"sept mille huit cent cinquante neuf"`but not for:`"cent mille douze"` ?The answer is as follows:- it is rather easy for the network to learn a mapping between symbols (first case), by dismissing `"cent"` and `"mille"`;- outputting the right number of symbols, especially the `0`s in `"cent mille douze"`, requires more reasoning and the ability to count. Let's have a look at generalization beyond correct French and see whether the network generalizes the way a French speaker would:
###Code
phrases = [
"quatre vingt et un",
"quarante douze",
"onze mille soixante vingt sept",
"deux mille soixante vingt quatorze",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
###Output
_____no_output_____
###Markdown
Model evaluationBecause **we expect only one correct translation** for a given source sequence, we can use **phrase-level accuracy** as a metric to quantify our model quality.Note that **this is not the case for real translation models** (e.g. from French to English on arbitrary sentences). Evaluation of a machine translation model is tricky in general. Automated evaluation can somehow be done at the corpus level with the [BLEU score](https://en.wikipedia.org/wiki/BLEU) (bilingual evaluation understudy), given a large enough sample of correct translations provided by certified translators, but it is only a noisy proxy.The only good evaluation is to give a large enough sample of the model predictions on some test sentences to certified translators and ask them to give an evaluation (e.g. a score between 0 and 6, 0 for nonsensical and 6 for the hypothetical perfect translation). However in practice this is very costly to do.Fortunately we can just use phrase-level accuracy on our very domain-specific toy problem:
###Code
def phrase_accuracy(model, num_sequences, fr_sequences, n_samples=300,
decoder_func=greedy_translate):
correct = []
n_samples = len(num_sequences) if n_samples is None else n_samples
for i, num_seq, fr_seq in zip(range(n_samples), num_sequences, fr_sequences):
if i % 100 == 0:
print("Decoding %d/%d" % (i, n_samples))
        predicted_seq = decoder_func(model, fr_seq,
shared_vocab, rev_shared_vocab,
word_level_target=False)
correct.append(num_seq == predicted_seq)
return np.mean(correct)
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test))
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train))
###Output
_____no_output_____
###Markdown
Bonus: Decoding with a Beam SearchInstead of decoding with a greedy strategy that only considers the most likely next token at each prediction, we can hold a priority queue of the most promising top-n sequences ordered by loglikelihood.This could potentially improve the final accuracy of an imperfect model: indeed it can be the case that the most likely sequence (based on the conditional probability estimated by the model) starts with a character that is not the most likely alone.**Bonus Exercise:**- build a beam_translate function which decodes candidate translations with a beam search strategy- use a list of candidates, tracking `beam_size` candidates and their corresponding loglikelihoods- compute predictions for the next outputs by calling predict with a batch of the size of the beam- be careful to stop appending results once the EOS symbol has been found for each candidate!
###Code
def beam_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True,
beam_size=10, return_ll=False):
"""Decode candidate translations with a beam search strategy
If return_ll is False, only the best candidate string is returned.
If return_ll is True, all the candidate strings and their loglikelihoods
are returned.
"""
# %load solutions/beam_search.py
candidates = beam_translate(simple_seq2seq, "cent mille un",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
candidates = beam_translate(simple_seq2seq, "quatre vingts",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
###Output
_____no_output_____
###Markdown
Model Accuracy with Beam Search Decoding
###Code
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test,
decoder_func=beam_translate))
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train,
decoder_func=beam_translate))
###Output
_____no_output_____
|
.ipynb_checkpoints/OpSeF_IV_Reimport_001-checkpoint.ipynb
|
###Markdown
Requirements:Tested with opsef003.yml (see attached file); opsef002 + n2v = opsef003; on a GeForce RTX 2080 with 8 GB RAM; on Ubuntu 18.04.3.
###Code
# based on OpSeF_IV_Run_002_dev
###Output
_____no_output_____
###Markdown
Adapted from: https://github.com/MouseLand/cellpose, https://github.com/CellProfiler/CellProfiler, https://github.com/mpicbg-csbd/stardist, https://github.com/scikit-image/scikit-image, https://github.com/VolkerH/unet-nuclei/. Thanks to all developers of the above-mentioned repositories.
###Code
# basic libs
import os
import sys
import time
import datetime
import inspect
from glob import glob
import tifffile as tif
import cv2 as cv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import collections
import math
import pickle
import networkx as nx
%matplotlib inline
# skimage
import skimage
from skimage import transform, io, filters, measure, morphology,img_as_float
from skimage.color import label2rgb,gray2rgb
from skimage.filters import gaussian, rank, threshold_otsu
from skimage.io import imread, imsave
from skimage.measure import label, regionprops, regionprops_table
from skimage.morphology import disk, watershed
# scipy
from scipy.signal import medfilt
from scipy.ndimage import generate_binary_structure, binary_dilation
# for cluster analysis
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import AgglomerativeClustering
main_folder = os.path.dirname(os.path.abspath(inspect.stack()[0][1]))
import_path = os.path.join(main_folder,"Utils_and_Configs")
if import_path not in sys.path:
sys.path.append(import_path)
# import from import_path
from Tools_002 import *
from UNet_CP01 import *
from Segmentation_Func_06 import *
from Pre_Post_Process002 import *
from N2V_DataGeneratorTR001 import *
from opsef_core_002 import *
###Output
Using TensorFlow backend.
/home/trasse/anaconda3/envs/opsef003/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/trasse/anaconda3/envs/opsef003/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/trasse/anaconda3/envs/opsef003/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/trasse/anaconda3/envs/opsef003/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/trasse/anaconda3/envs/opsef003/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/trasse/anaconda3/envs/opsef003/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
###Markdown
Functions
###Code
def splitpath(path, maxdepth=20):
'''Splits a path in all its parts'''
( head, tail ) = os.path.split(path)
return splitpath(head, maxdepth - 1) + [ tail ] \
if maxdepth and head and head != path \
else [ head or tail ]
def import_mask_to_img_dics(fp):
''' Imports filepair list as dic of dics'''
# read content
    with open(fp) as f:
content = f.readlines()
# remove whitespace characters like `\n` at the end of each line
content = [x.strip() for x in content]
# make target dic
mask_to_img_dic_dic = {}
# create static part of filepath
path_as_list = splitpath(fp)
folder_base = os.path.join(*path_as_list[:-2])
# convert content
for seg_mask_id in range(1,len(content[0].split(";"))):
mask_to_img_dic = {}
for line in content:
mylist = line.split(";")
mask_to_img_dic[folder_base+mylist[0]] = folder_base + mylist[seg_mask_id]
mask_to_img_dic_dic[seg_mask_id] = mask_to_img_dic
return mask_to_img_dic_dic
###Output
_____no_output_____
###Markdown
Main: Load parametersThe parameters for processing need to be defined in the notebook Opsef_Setup_000X. That notebook will print a file_path at the end; please cut and paste it below!
###Code
# load the info on the original segmentation
file_path = "/mnt/ag-microscopy/SampleDataML/EpiCells_Reimport/Parameter_SDB2018Cellsinv_Run_005.pkl"
infile = open(file_path,'rb')
parameter = pickle.load(infile)
print("Loading processing pipeline from",file_path)
infile.close()
pc,input_def,run_def,initModelSettings = parameter
# load the info on the files to be imported
pair_dic_fn = "/mnt/ag-microscopy/SampleDataML/EpiCells_Reimport/Processed_005/10_ImportExport/Fiji_FilePairList_SDB2018_EpiCells_005.txt"
mask_master_dic = import_mask_to_img_dics(pair_dic_fn)
mask_master_dic
def results_to_csv(mask_to_img_dic,get_property,root,sub_f,run_ID,output_folder,tag,subset):
'''
Here the results are extracted and saved as csv file.
The naming scheme of the folder Basic_Quantification is as follows:
Combined_Object_$Data_Analysis_ID_$Search_Term_you_used_to_filter_results.csv
Contains all combined results per object.
Results_$Mask_Filename_$Intensity_Image_filename.csv
Contains results per object for the defined pair of images.
Combined_Object_Data_Analysis_ID_$Data_Analysis_ID_$Search_Term_you_used_to_filter_results.csv
    Contains all post-processed results per image (e.g. cell number, mean intensity, etc.)
'''
stats_per_folder = []
count = 0
for key,value in mask_to_img_dic.items():
stats_per_img = {}
# load images
ma = tif.imread(key)
im = tif.imread(value)
# get results per object
results = skimage.measure.regionprops_table(ma, intensity_image=im, properties=get_property, cache=True)
results_df = pd.DataFrame.from_records(results)
results_df["Mask_Image"] = os.path.split(key)[1]
results_df["Intensity_Image"] = os.path.split(value)[1]
results_df["sum_intensity"] = results_df["mean_intensity"] * results_df["area"]
results_df["circularity"] = results_df["equivalent_diameter"] * math.pi / results_df["perimeter"]
new_order = ["Mask_Image","label","area","centroid-0","centroid-1"] + get_property[4:] + ["sum_intensity","circularity","Intensity_Image"]
results_df = results_df.reindex(columns = new_order)
# to avoid confusion for Fiji user
results_df.rename(columns={'centroid-0':'centroid-0_Fiji_Y'}, inplace=True)
results_df.rename(columns={'centroid-1':'centroid-1_Fiji_X'}, inplace=True)
new_fn = "Results_{}_{}.csv".format(os.path.split(key)[1],os.path.split(value)[1])
new_fp = os.path.join(root,"Processed_{}".format(run_ID),sub_f[output_folder],new_fn)
results_df.to_csv(new_fp, sep=';', decimal=',')
# get results per image
stats_per_img["Mask"] = os.path.split(key)[1]
stats_per_img["Intensity_Image"] = os.path.split(value)[1]
stats_per_img["count"] = results_df.shape[0]
stats_per_img["median_area"] = results_df["area"].median()
stats_per_img["mean_area"] = results_df["area"].mean()
stats_per_img["mean_intensity"] = results_df["mean_intensity"].mean()
stats_per_img["median_circularity"] = results_df["circularity"].median()
stats_per_img["median_sum_intensity"] = results_df["sum_intensity"].median()
stats_per_folder.append(stats_per_img)
if count > 0:
all_data = pd.concat([all_data,results_df])
else:
all_data = results_df
count += 1
# save combined object data
new_fn = "Combined_Object_Data_{}_{}.csv".format("_".join(subset),tag)
new_fp = os.path.join(root,"Processed_{}".format(run_ID),sub_f[output_folder],new_fn)
all_data.to_csv(new_fp, sep=';', decimal=',')
# save summary data
all_results_df = pd.DataFrame.from_records(stats_per_folder)
new_fn = "Summary_Results_{}_{}.csv".format("_".join(subset),tag)
new_fp = os.path.join(root,"Processed_{}".format(run_ID),sub_f[output_folder],new_fn)
all_results_df.to_csv(new_fp, sep=';', decimal=',')
return all_data
###Output
_____no_output_____
###Markdown
Export additional channel & Quantify Results
###Code
pc = {}
# define here what you want to do
pc["Export_to_CSV"] = True
if pc["Export_to_CSV"]:
all_combined = [] # used for quantifications of more than one intensity channel
# get a list of the masks that were produced by segmentation
mask_files = glob(os.path.join(input_def["root"],"Processed_{}".format(run_def["run_ID"]),pc["sub_f"][2])+"/*.tif")
mask_to_img_dic, mask_to_8bitimg_dic = make_mask_to_img_dic(mask_files,pc,input_def,run_def,0,pc["Intensity_Ch"])
if pc["toFiji"]:
if not pc["Export_to_CSV"]:
mask_files = glob(os.path.join(input_def["root"],"Processed_{}".format(run_def["run_ID"]),pc["sub_f"][2])+"/*.tif")
mask_to_img_dic, mask_to_8bitimg_dic = make_mask_to_img_dic(mask_files,pc,input_def,run_def,0,pc["Intensity_Ch"])
root_plus = os.path.join(input_def["root"],"Processed_{}".format(run_def["run_ID"]))
txt_fn = os.path.join(root_plus,pc["sub_f"][10],"FilePairList_{}_{}.txt".format(input_def["dataset"],run_def["run_ID"]))
with open(txt_fn,"w") as f:
for mask_fn,image_fn in mask_to_8bitimg_dic.items():
f.write("{};{}{}".format(image_fn.replace(root_plus,""),mask_fn.replace(root_plus,""),"\n"))
f.close()
# export additional channel
if pc["export_another_channel"]:
if input_def["input_type"] == ".lif":
exported_file_list = export_second_channel_for_mask(lifobject,pc,input_def,run_def)
if input_def["input_type"] == ".tif":
exported_file_list = export_second_channel_for_mask("NoneIsTiFF",pc,input_def,run_def)
# optional in case segmentation results shall be filtered by a mask:
if pc["create_filter_mask_from_channel"]:
        # create new masks (by thresholding the additional input) and extract their names
new_mask_fn_list = create_mask_from_add_ch(exported_file_list,input_def["root"],pc["sub_f"],run_def["run_ID"],run_def["para_mp"],run_def)
# make a dic that has the segmentation output mask name as key, the name of the threshold mask as value
if input_def["input_type"] == ".lif":
pair_dic = make_pair_second_mask_simple(mask_files,new_mask_fn_list)
if input_def["input_type"] == ".tif":
core_match = [8,10] # use to define how to match filenames
# for documentation see: how_to_define_core_match.txt
# integrate this variable in OpSeF_Setup!!!
pair_dic = make_pair_second_mask_tiff(mask_files,new_mask_fn_list,core_match)
        # create new segmentation masks per class and return a list of file_names
class1_to_img_dic,class2_to_img_dic = split_by_mask(input_def["root"],run_def["run_ID"],pc["sub_f"],pair_dic,mask_to_8bitimg_dic,mask_to_img_dic)
# print(mask_files)
if pc["toFiji"]:
if pc["create_filter_mask_from_channel"]:
root_plus = os.path.join(input_def["root"],"Processed_{}".format(run_def["run_ID"]))
txt_fn = os.path.join(root_plus,pc["sub_f"][10],"FilePairList_Classes_{}_{}.txt".format(input_def["dataset"],run_def["run_ID"]))
img_to_class2_dic = dict((v,k) for k,v in class2_to_img_dic.items()) # invert dic 2
with open(txt_fn,"w") as f:
for mask_fn,image_fn in class1_to_img_dic.items():
mask2 = img_to_class2_dic[image_fn] # second seg mask
f.write("{};{};{};{}".format(image_fn.replace(root_plus,""),mask_fn.replace(root_plus,""),mask2.replace(root_plus,""),"\n"))
f.close()
###Output
_____no_output_____
###Markdown
Export results
###Code
# quantify original mask
if pc["Export_to_CSV"]:
all_combined.append(results_to_csv(mask_to_img_dic,pc["get_property"],input_def["root"],pc["sub_f"],run_def["run_ID"],4,"All_Main",input_def["subset"])) # 4 is the main result folder
if pc["plot_head_main"]:
all_combined[0].head()
if pc["create_filter_mask_from_channel"]:
# quantify class1 masks
results_to_csv(class1_to_img_dic,pc["get_property"],input_def["root"],pc["sub_f"],run_def["run_ID"],9,"Class00",input_def["post_subset"]) # 9 is the classified result folder
# quantify class2 masks
results_to_csv(class2_to_img_dic,pc["get_property"],input_def["root"],pc["sub_f"],run_def["run_ID"],9,"Class01",input_def["post_subset"]) # 9 is the classified result folder
if pc["Quantify_2ndCh"]:
mask_to_img_dic, mask_to_8bitimg_dic = make_mask_to_img_dic(mask_files,pc,input_def,run_def,5,pc["Intensity_2ndCh"])
all_combined.append(results_to_csv(mask_to_img_dic,pc["get_property"],input_def["root"],pc["sub_f"],run_def["run_ID"],4,"All_2nd",input_def["subset"]))
if pc["merge_results"]:
result_summary = merge_intensity_results(all_combined,input_def,pc["sub_f"],run_def,4)
if pc["plot_merged"]:
result_summary.head()
else:
if pc["Export_to_CSV"]:
result_summary = all_combined[0]
###Output
_____no_output_____
###Markdown
AddOn 1: Basic plotting of results
###Code
if pc["Plot_Results"]:
fig, axs = plt.subplots(len(pc["Plot_xy"]), 1, figsize=(5, 5*len(pc["Plot_xy"])), constrained_layout=True)
for i in range(0,len(pc["Plot_xy"])):
axs[i].scatter(result_summary[pc["Plot_xy"][i][0]],result_summary[pc["Plot_xy"][i][1]], c="red")
axs[i].set_title('{} vs {}'.format(*pc["Plot_xy"][i]))
axs[i].set_xlabel(pc["Plot_xy"][i][0],fontsize=15)
axs[i].set_ylabel(pc["Plot_xy"][i][1],fontsize=15)
###Output
_____no_output_____
###Markdown
AddOn 2: Do PCA and TSNE Example pipeline auto-clustering
###Code
if pc["Cluster_How"] == "Auto":
# get data for PCA / TSNE
df_for_tsne_list = extract_values_for_TSNE_PCA(input_def["root"],run_def["run_ID"],pc["sub_f"],4,pc["include_in_tsne"])
# get cluster
data = df_for_tsne_list[0].values
auto_clustering = AgglomerativeClustering(linkage=pc["link_method"], n_clusters=pc["cluster_expected"]).fit(data)
# do analysis
result_tsne = TSNE(learning_rate=pc["tSNE_learning_rate"]).fit_transform(data)
result_pca = PCA().fit_transform(data)
# display results
fig, axs = plt.subplots(2, 1, figsize=(10, 20), constrained_layout=True)
axs[0].scatter(result_tsne[:, 0], result_tsne[:, 1], c=auto_clustering.labels_)
axs[0].set_title('tSNE')
axs[1].scatter(result_pca[:, 0], result_pca[:, 1], c=auto_clustering.labels_)
axs[1].set_title('PCA')
###Output
_____no_output_____
###Markdown
Example pipeline mask-clustering
###Code
# get data for PCA / TSNE
if pc["Cluster_How"] == "Mask":
df_for_tsne_list_by_class = extract_values_for_TSNE_PCA(input_def["root"],run_def["run_ID"],pc["sub_f"],9,pc["include_in_tsne"])
fused_df = pd.concat(df_for_tsne_list_by_class,axis = 0,join="outer")
data_by_class = fused_df.values
class_def_by_mask = [0 for x in range (0,df_for_tsne_list_by_class[0].shape[0])] + [1 for x in range (0,df_for_tsne_list_by_class[1].shape[0])]
# do analysis
result_tsne_by_class = TSNE(learning_rate=pc["tSNE_learning_rate"]).fit_transform(data_by_class)
result_pca_by_class = PCA().fit_transform(data_by_class)
# display results
fig, axs = plt.subplots(2, 1, figsize=(10, 20), constrained_layout=True)
axs[0].scatter(result_tsne_by_class[:, 0], result_tsne_by_class[:, 1], c=class_def_by_mask)
axs[0].set_title('tSNE')
axs[1].scatter(result_pca_by_class[:, 0], result_pca_by_class[:, 1], c=class_def_by_mask)
axs[1].set_title('PCA')
###Output
_____no_output_____
###Markdown
Results
###Code
print("Processing completed successfully!\n")
print("All results have been saved in this folder: \n")
print(os.path.join(input_def["root"],"Processed_{}".format(run_def["run_ID"])))
###Output
_____no_output_____
|
material/session_6/lecture_6.ipynb
|
###Markdown
Notebook contents: This notebook contains a lecture. The code for generating plots is found at the end of the notebook. Links below.- [presentation](Session-6:-Data-structuring-II)- [code for plots](Code-for-plots) Session 6: Data structuring II The Pandas way*Andreas Bjerre-Nielsen* Recap*What do we know about explanatory plotting?*- matplotlib with heavy customization (labels, colors, thickness)- start from empty canvas and add important stuff- start with customized plot (e.g. from seaborn) and remove everything unnecessary*What do we know about exploratory plotting?*- seaborn has nice plots- advanced plots for making plot grids Motivation*Reminder: Why do we want to learn data structuring?*- We have to do it, data is almost never cleaned- No one can and will do it for us- Even as a manager of data scientists - we need to know AgendaWe will learn about new data types 1. [string data](String-data)1. [temporal data](Temporal-data)1. [categorical data](Categorical-data)1. [missing data](Missing-data) and [duplicates](Duplicates) Loading the software
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
String data String operations vectorized (1)*Quiz: Which operators could work for string?* Operators `+`, `+=`. Example:
###Code
str_ser1 = pd.Series(['Andreas', 'Snorre', 'David'])
str_ser1 + ' works @ SODAS'
###Output
_____no_output_____
###Markdown
String operations vectorized (2)Addition also works for two series
###Code
# adding two series together is also possible
str_ser2 = pd.Series(['Bjerre-Nielsen', 'Ralund', 'Dreyer Lassen'])
str_ser1 + ' ' + str_ser2
###Output
_____no_output_____
###Markdown
String operations vectorized (3)The powerful .str has several powerful methods e.g. `contains`, `capitalize`. Example:
###Code
# str_ser1.str.upper()
str_ser1.str.contains('And')
###Output
_____no_output_____
###Markdown
String operations vectorized (4) The .str methods include slicing - example:
###Code
str_ser2.str[1:4]
###Output
_____no_output_____
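###Markdown
A few more vectorized string methods, as a quick illustrative sketch using the series defined above:
###Code
print(str_ser1.str.len())              # length of each string
print(str_ser1.str.replace('a', 'A'))  # vectorized replace
print(str_ser2.str.split('-'))         # split into lists of substrings
###Output
_____no_output_____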
###Markdown
String operations vectorized (5)Many more `str` methods in pandas,- most basic string methods translate directly- see Table 7-5 in PDA for an overview Categorical data Categorical data type (1)*Are string (object) columns smart?* No, sometimes the categorical data type is better:- use categorical when many characters are repeated - less storage and faster computation - or to order string data Categorical data type (2)*How do we convert to categorical?*
###Code
edu_list = ['B.Sc. Political Science', 'Secondary school'] + ['High school']*2
edu_cats = ['Secondary school', 'High school', 'B.Sc. Political Science']
str_ser3 = pd.Series(edu_list*100000)
# option 1 - order
cats = pd.Categorical(str_ser3, categories=edu_cats, ordered=True)
cat_ser = pd.Series(cats, index=str_ser3)
cat_ser.values[:5]
# option 2 - no order
cat_ser2 = str_ser3.astype('category')
###Output
_____no_output_____
###Markdown
Categorical data type (3)*How do we work with categorical data?* - Using the `cat` attribute of series. Has a few methods. E.g. `.cat.codes`
###Code
print(cat_ser.cat.codes)
###Output
B.Sc. Political Science 2
Secondary school 0
High school 1
High school 1
dtype: int8
###Markdown
Categorical data type (4)Often we want to encode our string / categorical data as dummy variables- each category value has a dummy column (0 or 1)- dummy columns can be made with `pd.get_dummies` (see the sketch after the next cell) Categorical data type (5)*Can we convert our numerical data to bins in a smart way?* Yes, two methods are useful (we already saw `cut`): - `cut`, which divides data by user-specified bins- `qcut`, which divides data by user-specified quantiles - e.g. median, $q=0.5$; lower quartile threshold, $q=0.25$.
###Code
x = pd.Series(np.random.normal(size=10**7))
cat_ser3 = pd.qcut(x, q=[0,.025,.975,1])
cat_ser3.cat.categories
###Output
_____no_output_____
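###Markdown
Returning to the dummy-variable point above, a minimal sketch with `pd.get_dummies` (using the education list defined earlier):
###Code
# One 0/1 column per category value
pd.get_dummies(pd.Series(edu_list))
###Output
_____no_output_____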
###Markdown
Temporal data Temporal data type (1)*Why is time so fundamental?* Every measurement made by humans was made at a point in time, therefore it has a "timestamp". Temporal data type (2)*How are timestamps measured?* 1. **Datetime** (ISO 8601): standard calendar - year, month, day: minute, second, miliseconds etc. [timezone] - can come as string in raw data2. **Epoch time**: seconds since January 1, 1970 - 00:00, GMT. - nanoseconds in pandas Temporal data type (3)*Does Pandas store it in a smart way?* Pandas has native support for temporal data combining datetime and epoch time.
###Code
str_ser4 = pd.Series(['20170101', '20170727', '20170803', '20171224'])
dt_ser1 = pd.to_datetime(str_ser4)
print(dt_ser1.astype('int64'))
###Output
0 1483228800000000000
1 1501113600000000000
2 1501718400000000000
3 1514073600000000000
dtype: int64
###Markdown
Temporal data type (4)*How does the input type matter for how datetime is parsed?*
###Code
# print(pd.to_datetime(['20170101', '20170102']))
print(pd.to_datetime([20170101, 20170102]))
###Output
DatetimeIndex(['1970-01-01 00:00:00.020170101', '1970-01-01 00:00:00.020170102'], dtype='datetime64[ns]', freq=None)
###Markdown
Time series (1)*Why is temporal data powerful?* We can easily make and plot time series. Almost 20 years of Apple stock price- tip install in terminal using: `conda install pandas-datareader`
###Code
from pandas_datareader import data
aapl = data.DataReader("aapl", data_source='yahoo', start='2000')['Adj Close']
aapl.plot(figsize=(10,3), logy=True)
###Output
_____no_output_____
###Markdown
Time series (2)*What is within the `aapl` series? What is a time series?*
###Code
aapl.head().index
###Output
_____no_output_____
###Markdown
Time series is normally data with a time index. Time series (3)*Why is pandas good at time data?* It handles irregular data well: - missing values;- duplicate entries. It has specific tools for resampling and interpolating data (a small sketch follows the next cell)- see 11.3, 11.5, 11.6 in the PDA book. Datetime variables (1)*What other uses might time data have?* We can extract data from datetime columns. These columns have the `dt` attribute and its sub-methods. Example:
###Code
dt_ser2 = pd.Series(aapl.index)
dt_ser2.dt.day.loc[:7]
# dt_ser2.dt.day.iloc[500:505]
# dt_ser2.dt.year.head(3)
###Output
_____no_output_____
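###Markdown
A small illustrative sketch of the resampling tools mentioned above, using the `aapl` series loaded earlier (assuming the download succeeded):
###Code
# Downsample the daily prices to monthly means
aapl.resample('M').mean().head()
###Output
_____no_output_____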
###Markdown
Datetime variables (2)The `dt` sub-methods include `year`, `weekday`, `hour`, `second`. *To note:* Your temporal data may need conversion. `dt` includes `tz_localize` and `tz_convert`, which do that (see the sketch after the next cell). Datetime variables (3)*Quiz: What are you to do if you get time data with numbers of around 1-2 billion?* It is likely to be epoch time measured in seconds. We can convert it as follows:
###Code
pd.to_datetime([123512321, 1532321321], unit='s')
###Output
_____no_output_____
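###Markdown
A quick sketch of the timezone handling mentioned above (the timestamps and timezones are made up for illustration):
###Code
ts = pd.Series(pd.to_datetime(['2017-01-01 12:00', '2017-07-01 12:00']))
ts_utc = ts.dt.tz_localize('UTC')                 # declare the naive timestamps as UTC
print(ts_utc.dt.tz_convert('Europe/Copenhagen'))  # convert to another timezone
###Output
_____no_output_____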
###Markdown
Missing data Missing data type (1)*Which data type have we not covered yet?* Missing data, i.e. empty observations.- In Python: `None`- In pandas: numpy's 'Not a Number', abbreviated `NaN` or `nan` Missing data type (2)*What does a DataFrame with missing data look like?*
###Code
nan_data = [[1,np.nan,3],
[4,5,None],
[7,8,9]]
nan_df = pd.DataFrame(nan_data, columns=['A', 'B', "C"])
print(nan_df.isnull())
###Output
A B C
0 False True False
1 False False True
2 False False False
###Markdown
Handling missing data*What options do we have when working with missing data?* 1. Ignore the problem2. Drop missing data: columns and/or rows3. Fill in the blanks4. If time and money permit: collect the data or new data Removing missing data *How do we remove data?* Using the `dropna` method.
###Code
print(nan_df)
print()
print(nan_df.dropna(axis=0,subset=['B'])) # subset=['B'], axis=1
###Output
A B C
0 1 NaN 3.0
1 4 5.0 NaN
2 7 8.0 9.0
A B C
1 4 5.0 NaN
2 7 8.0 9.0
###Markdown
Filling missing data (1)*How do we fill observations with a constant?*
###Code
# print(nan_df.fillna(100)) # fill all
# print(nan_df)
###Output
A B C
0 1 100.0 3.0
1 4 5.0 100.0
2 7 8.0 9.0
###Markdown
Note: we can also select missing values with `isnull` and then replace them using `loc`. Filling missing data (2)*Are there other methods?* Yes, many methods:- Filling sorted temporal data, see `ffill`, `bfill` (a small sketch follows the duplicates example below)- Filling with a model - e.g. linear interpolation, by mean of nearest observations etc. - `sklearn`, covered next week, can impute data Duplicates Duplicates in data (1)*What does it mean that there are duplicates in the data?* - More than one entry where there should be only one.- If for a certain set of variables the combination is repeated. Duplicates in data (2)*How do we drop duplicates?*
###Code
# print(str_ser3.duplicated())
print(str_ser3.drop_duplicates())
###Output
0 B.Sc. Political Science
1 Secondary school
2 High school
dtype: object
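###Markdown
Returning to the fill methods mentioned above, a small sketch with made-up values:
###Code
gap_ser = pd.Series([1.0, np.nan, np.nan, 4.0])
print(gap_ser.ffill())        # forward fill: repeat the last valid value
print(gap_ser.interpolate())  # linear interpolation between valid values
###Output
_____no_output_____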
###Markdown
More datatypes- Spatial data with `geopandas` (GeoSeries, GeoDataFrame) - Has methods for working with shapes- You can define your own - e.g. price, temperature, energy (change currency, measure etc.) - What we do not cover: networks The end[Return to agenda](Agenda) Code for plots Load software
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import seaborn as sns
plt.style.use('ggplot')
%matplotlib inline
SMALL_SIZE = 16
MEDIUM_SIZE = 18
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
###Output
_____no_output_____
###Markdown
Data structuring, part 2 The Pandas way*Andreas Bjerre-Nielsen* Recap*What do we know about explanatory plotting?*- Eye candy, self-explanatory- Minimal (only the necessary), layered grammar of graphics- Be aware of message*What do we know about exploratory plotting?*- seaborn is very good. Motivation*Reminder: Why do we want to learn data structuring?*- We have to do it, data is almost never cleaned- No one can and will do it for us- Even as a manager of data scientists - we need to know AgendaWe will learn about new data types 1. [string data](String-data)1. [temporal data](Temporal-data)1. [missing data](Missing-data) 1. [useful tools](Useful-tools) Loading the software
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
String data String operations vectorized (1)*Quiz: Which operators could work for string?* Operators **+**, **+=**. Example:
###Code
str_ser1 = pd.Series(['Andreas', 'Snorre', 'Ulf'])
str_ser1 + ' works @ SODAS'
###Output
_____no_output_____
###Markdown
String operations vectorized (2)Addition also works for two series
###Code
# adding two series together is also possible
str_ser2 = pd.Series(['Bjerre-Nielsen', 'Ralund', 'Aslak'])
str_ser1 + ' ' + str_ser2
###Output
_____no_output_____
###Markdown
String operations vectorized (3)The powerful .str has several powerful methods e.g. `contains`, `capitalize`. Example:
###Code
str_ser1.str.upper()
str_ser1.str.contains('e')
###Output
_____no_output_____
###Markdown
String operations vectorized (4) The .str methods include slicing - example:
###Code
str_ser2.str[1:4]
###Output
_____no_output_____
###Markdown
String operations vectorized (5)Many more `str` methods in pandas,- most basic string methods translate directly- see Table 7-5 in PDA for an overview Categorical data type (1)*Are string columns smart for storage and speed?* No, sometimes it is better to convert to the categorical data type:- use categorical when many characters are repeated. Categorical data type (2)*How do we convert to categorical?*
###Code
edu_list = ['B.Sc. Political Science', 'Secondary school'] + ['High school']*2
edu_cats = ['Secondary school', 'High school', 'B.Sc. Political Science']
str_ser3 = pd.Series(edu_list)
# option 1
cats = pd.Categorical(str_ser3, categories=edu_cats, ordered=True)
cat_ser = pd.Series(cats, index=str_ser3)
# option 2 - no order - fast
cat_ser2 = str_ser3.astype('category')
###Output
_____no_output_____
###Markdown
Categorical data type (3)*How do we work with categorical data?* - Using the `cat` attribute of series. Has a few methods. E.g. `.cat.codes`
###Code
print(cat_ser)
print()
print(cat_ser.cat.codes)
###Output
B.Sc. Political Science B.Sc. Political Science
Secondary school Secondary school
High school High school
High school High school
dtype: category
Categories (3, object): [Secondary school < High school < B.Sc. Political Science]
B.Sc. Political Science 2
Secondary school 0
High school 1
High school 1
dtype: int8
###Markdown
*Why categorical?*- Storage and faster computation (sometimes) - Allows for ordering strings Temporal data Temporal data type (1)*Why is time so fundamental?* Every measurement made by humans was made at a point in time, therefore it has a "timestamp". Temporal data type (2)*How are timestamps measured?* 1. Datetime (ISO 8601): standard calendar - year, month, day: minute, second, miliseconds etc. [timezone] - comes as strings in raw data2. Epoch time: seconds since January 1, 1970 - 00:00, GMT. - nanoseconds in pandas Temporal data type (3)*Does Pandas store it in a smart way?* Pandas has native support for temporal data combining datetime and epoch time.
###Code
str_ser4 = pd.Series(['20170101', '20170727', '20170803', '20171224'])
dt_ser1 = pd.to_datetime(str_ser4)
print(dt_ser1)
print(dt_ser1.astype(np.int64))
###Output
0 2017-01-01
1 2017-07-27
2 2017-08-03
3 2017-12-24
dtype: datetime64[ns]
0 1483228800000000000
1 1501113600000000000
2 1501718400000000000
3 1514073600000000000
dtype: int64
###Markdown
Temporal data type (4)*How does the input type matter for how datetime is parsed?*
###Code
print(pd.to_datetime(['20170101', '20170102']))
print(pd.to_datetime([20170101, 20170102]))
###Output
DatetimeIndex(['2017-01-01', '2017-01-02'], dtype='datetime64[ns]', freq=None)
DatetimeIndex(['1970-01-01 00:00:00.020170101', '1970-01-01 00:00:00.020170102'], dtype='datetime64[ns]', freq=None)
###Markdown
Time series (1)*Why is temporal data powerful?* We can easily make and plot time series.
###Code
T = 1000
data = {v:np.cumsum(np.random.randn(T)) for v in ['A', 'B']}
data['time'] = pd.date_range(start='20150101', freq='D', periods=T)
ts_df = pd.DataFrame(data)
# print(ts_df.head())
ts_df.set_index('time').plot(figsize=(10,5))
###Output
_____no_output_____
###Markdown
Time series (2)*Why is pandas good at time data?* It handles irregular data well: - missing values;- duplicate entries. It has specific tools for resampling and interpolating data- See 11.3, 11.5, 11.6 in PDA book. Datetime variables (1)*What other uses might time data have?* We can extract data from datetime columns. These columns have the `dt` attribute and its sub-methods. Example:
###Code
dt_ser2 = ts_df.time
# dt_ser2.dt.day.iloc[500:505]
dt_ser2.dt.year.head(3)
###Output
_____no_output_____
###Markdown
Datetime variables (2)The `dt` sub-methods include `year`, `weekday`, `hour`, `second`. *To note:* Your temporal data may need conversion. `dt` includes `tz_localize` and `tz_convert`, which do that. Datetime variables (3)*Quiz: What are you to do if you get time data with numbers of around 1-2 billion?* It is likely to be epoch time measured in seconds. We can convert it as follows:
###Code
pd.to_datetime([123512321,2132321321], unit='s')
###Output
_____no_output_____
###Markdown
Missing data Missing data type (1)*Which data type have we not covered yet?* Missing data, i.e. empty observations.- In Python: `None`- In pandas: numpy's 'Not a Number', abbreviated `NaN` or `nan` Missing data type (2)*What does a DataFrame with missing data look like?*
###Code
nan_data = [[1,np.nan,3],
[4,5,None],
[7,8,9]]
nan_df = pd.DataFrame(nan_data, columns=['A','B','C'])
# print(nan_df)
print(nan_df.isnull().sum())
###Output
A 0
B 1
C 1
dtype: int64
###Markdown
Handling missing data*What options do we have when working with missing data?* 1. Ignore the problem2. Drop missing data: columns and/or rows3. Fill in the blanks4. If time and money permit: collect the data or new data Removing missing data (1)*How do we remove data?* Using the `dropna` method.
###Code
print(nan_df)
print()
print(nan_df.dropna(axis=1)) # subset=['B'], axis=1
###Output
A B C
0 1 NaN 3.0
1 4 5.0 NaN
2 7 8.0 9.0
A
0 1
1 4
2 7
###Markdown
Filling missing data (1)*How do we fill observations with a constant?*
###Code
print(nan_df.fillna(2))
selection = nan_df.B.isnull()
nan_df.loc[selection, 'B'] = -99
print(nan_df)
###Output
A B C
0 1 -99.0 3.0
1 4 5.0 2.0
2 7 8.0 9.0
A B C
0 1 -99.0 3.0
1 4 5.0 NaN
2 7 8.0 9.0
###Markdown
Note: we can also select missing values with `isnull` and then replace them using `loc`. Filling missing data (2)*Are there other methods?* Yes, many methods:- Filling sorted temporal data, see `ffill`, `bfill`- Filling with a model - e.g. linear interpolation, by mean of nearest observations etc. - `sklearn`, covered next week, can impute data Useful tools Duplicates in data (1)*What does it mean that there are duplicates in the data?* - More than one entry where there should be only one.- If for a certain set of variables the combination is repeated. Duplicates in data (2)*How do we drop duplicates?*
###Code
str_ser3
str_ser3.drop_duplicates()
###Output
_____no_output_____
###Markdown
Duplicates in data (3)*How do we use duplicates?* Tomorrow morning we will be introduced to groupby, which can be used to compute various statistics (e.g. mean, median). Binning numerical data*Can we convert our numerical data to bins in a smart way?* Yes, two methods are useful: - `cut`, which divides data by user-specified bins- `qcut`, which divides data by user-specified quantiles - e.g. median, $q=0.5$; lower quartile threshold, $q=0.25$.
###Code
x = pd.Series(np.random.normal(size=10**6))
cat_ser3 = pd.qcut(x, q=[0,.95,1])
cat_ser3.cat.categories
###Output
_____no_output_____
|
.ipynb_checkpoints/Institutional Scenarios-checkpoint.ipynb
|
###Markdown
BP https://www.bp.com/en/global/corporate/energy-economics/energy-outlook.html
###Code
df = pd.read_excel("data/bp-energy-outlook-2020-chart-data-pack.xlsx",
sheet_name = "Gas Share",
skiprows =3, index_col = "Scenarios")
df
# fig, ax = plt.subplots()
df.T.plot()
plt.title("Gas share in global power generation \n BP Energy Outlook 2020")
plt.ylabel("Share (0-1)")
#ax.yaxis.set_major_formatter(mtick.PercentFormatter())
plt.savefig("figures/Gas share in global power generation BP Energy Outlook",
dpi = 300)
plt.show()
###Output
_____no_output_____
###Markdown
IEA
###Code
df = pd.read_excel("data/NZE2021_AnnexA.xlsx",
sheet_name = "World_Elec",
skiprows=4,
usecols=[0,1,2,3,4,5],
index_col = "Fuels")
df = df.iloc[0:19]
#Take a subset of only gas-based generation
df = df.loc[["Unabated Natural gas","Natural gas with CCUS", "Total generation"]]
df=df.T
df
df["Unabated Natural gas share"] = df["Unabated Natural gas"]/df["Total generation"]
df["Natural gas with CCUS share"] = df["Natural gas with CCUS"]/df["Total generation"]
df
df.to_excel("data/IEA Gas NZE.xlsx")
df.iloc[:,:2].plot(kind = "area", stacked = True)
plt.ylabel("TWh")
plt.title("Global Electricitiy Generation from Gas\n IEA Net Zero Scenario 2021")
plt.savefig("figures/Global electricity generation from gas in IEA Net Zero 2021 scenario", dpi = 300)
plt.show()
df.iloc[:,3:].plot(kind = "area", stacked = True)
plt.ylabel("Share (0-1")
plt.title("Gas share in global electricitiy generation\n IEA Net Zero Scenario 2021")
plt.savefig("figures/Gas share in global electricity generation IEA Net Zero 2021 scenario", dpi = 300)
plt.show()
###Output
_____no_output_____
###Markdown
Shell https://www.shell.com/energy-and-innovation/the-energy-future/scenarios/the-energy-transformation-scenarios.htmliframe=L3dlYmFwcHMvU2NlbmFyaW9zX2xvbmdfaG9yaXpvbnMv
###Code
file = "data\shell-energy-transformation-scenarios-summary-data.xlsx"
shell = pd.read_excel(file, sheet_name = "Sheet1", skiprows = 23)
shell.columns=shell.iloc[0]
shell.drop(shell.index[0], axis=0, inplace=True)
shell.set_index("Scenarios",inplace=True)
shell.columns = shell.columns.astype(int)
shell= shell.T
shell
shell.plot()
plt.title("Gas share in global electricity generation \n Shell Energy Transformation Scenarios 2021")
plt.ylabel("Share (0-1")
plt.xlabel("Years")
plt.savefig("figures/Gas share in global electricity generation Shell", dpi = 300)
plt.show()
###Output
_____no_output_____
|
_unittests/ut_helpgen/data_gallery/notebooks/2a/notebook_convert.ipynb
|
###Markdown
Convert a notebook into a document First, we need to retrieve the notebook name (see [How to I get the current IPython Notebook name](http://stackoverflow.com/questions/12544056/how-to-i-get-the-current-ipython-notebook-name)):
###Code
%%javascript
var kernel = IPython.notebook.kernel;
var body = document.body,
attribs = body.attributes;
var command = "theNotebook = " + "'"+attribs['data-notebook-name'].value+"'";
kernel.execute(command);
if "theNotebook" in locals():
a=theNotebook
else:
a="pas trouvé"
a
###Output
_____no_output_____
###Markdown
On Windows, you might need to execute the following trick (see [Pywin32 does not find its DLL](http://www.xavierdupre.fr/blog/2014-07-01_nojs.html)).
###Code
from pyquickhelper.helpgen.utils_pywin32 import import_pywin32
import_pywin32()
###Output
_____no_output_____
###Markdown
Then, we call the following code:
###Code
from nbconvert import HTMLExporter
exportHtml = HTMLExporter()
if a != "pas trouvé":
body,resources = exportHtml.from_filename(theNotebook)
with open("conv_notebook.html","w",encoding="utf8") as f : f.write(body)
###Output
_____no_output_____
###Markdown
We can do it with the RST format (see [RSTExporter](https://nbconvert.readthedocs.io/en/latest/api/exporters.html?highlight=rstexporternbconvert.exporters.RSTExporter)).
###Code
from nbconvert import RSTExporter
exportRst = RSTExporter()
if a != "pas trouvé":
body,resources = exportRst.from_filename(theNotebook)
with open("conv_notebook.rst","w",encoding="utf8") as f : f.write(body)
###Output
_____no_output_____
###Markdown
If you need to add custom RST instructions, you could add HTML comments (for example ``<!-- custom RST here -->``) and write custom code to convert them in your RST file. Finally, if you want to download a local file such as the RST conversion, for example:
###Code
from IPython.display import FileLink
FileLink("conv_notebook.rst")
###Output
_____no_output_____
|
module1-statistics-probability-and-inference/GGA_131_v4_asnmt_LS_DS_131_Statistics_Probability_Assignment.ipynb
|
###Markdown
*Data Science Unit 1 Sprint 3 Assignment 1* Apply the t-test to real dataYour assignment is to determine which issues have "statistically significant" differences between political parties in this [1980s congressional voting data](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records). The data consists of 435 instances (one for each congressperson), a class (democrat or republican), and 16 binary attributes (yes or no for voting for or against certain issues). Be aware - there are missing values!Your goals:1. Load and clean the data (or determine the best method to drop observations when running tests)2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.013. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.014. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference)Note that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis.Stretch goals:1. Refactor your code into functions so it's easy to rerun with arbitrary variables2. Apply hypothesis testing to your personal project data (for the purposes of this notebook you can type a summary of the hypothesis you formed and tested) +I'm adding to the assignment that when you have already done what's asked of you there, before you move onto the other stretch goals, that:5. You also practice some 1-sample t-tests6. You try and create some kind of a visualization that communicates the results of your hypothesis tests. This can be as simple as a histogram of the p-values or the t-statistics. Part 1: Load & Clean The Data
###Code
### YOUR CODE STARTS HERE
#importing libraries
import pandas as pd
import numpy as np
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel, t, ttest_1samp
import seaborn as sns
from matplotlib import style
import matplotlib.pyplot as plt
#loading file
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data
# checking to see what files are in the current main directory
!ls
# Loading data, creating dataframe df with a custom header
df = pd.read_csv('house-votes-84.data',
header=None,
names=['party','handicapped-infants','water-project',
'budget','physician-fee-freeze', 'el-salvador-aid',
'religious-groups','anti-satellite-ban',
'aid-to-contras','mx-missile','immigration',
'synfuels', 'education', 'right-to-sue','crime','duty-free',
'south-africa'])
#inspecting data
df.shape
#inspecting data
df.head(5)
###Output
_____no_output_____
###Markdown
Here, you can see question marks; that is no good. We must do something about that. Next we need to replace the question marks with NaN values - AND assign the result back to a recast DataFrame df. We will also change the string yes/no (yea, nay) votes to binary 1/0 integers.
###Code
# cleaning: +NaN, string to int
df = df.replace({'?':np.NaN, 'n':0, 'y':1})
#inspecting
df.shape
#instecting
df.head(5)
# inspecting
# Looking over abstentions...
#
# "How long can the British hang on in Gibraltar,
# where the tapestries of scimitared riders hunt tigers...
# clinging to the rocks like rock apes,
# clinging always to less and less."
df.isnull().sum()
#use "filtering" to create two new party based df (so much for nonpartisan dataframes...a sad day)
dem = df[df['party'] == 'democrat']
rep = df[df['party'] == 'republican']
#inspect
dem.head(5)
#inspect
rep.head(5)
###Output
_____no_output_____
###Markdown
2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.01
###Code
print(ttest_ind(rep['handicapped-infants'], dem['handicapped-infants'], nan_policy='omit'))
print(rep['handicapped-infants'].mean(), 'Republican mean')
print(dem['handicapped-infants'].mean(), 'Democratic mean')
###Output
Ttest_indResult(statistic=-9.205264294809222, pvalue=1.613440327937243e-18)
0.18787878787878787 Republican mean
0.6046511627906976 Democratic mean
###Markdown
Here the small p-value, far below the 0.01 threshold, indicates that the difference between the two means is beyond what chance alone would produce. The null hypothesis was that the two group means are equal, and this result rejects it: democrats supported this issue more than republicans. Ttest_indResult(statistic=-9.205264294809222, pvalue=1.613440327937243e-18)0.18787878787878787 Republican mean0.6046511627906976 Democratic mean
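###Markdown
A small sketch (not part of the original solution) making the decision rule just described explicit, with alpha set to the 0.01 level required by the assignment:
###Code
# compare the p-value against the chosen significance level
alpha = 0.01
stat, pval = ttest_ind(rep['handicapped-infants'], dem['handicapped-infants'], nan_policy='omit')
print('reject H0 (equal means):', pval < alpha)
###Output
_____no_output_____
###Markdown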
###Code
handi = ttest_ind(rep['handicapped-infants'], dem['handicapped-infants'], nan_policy='omit')
#import matplotlib.pyplot as plt
y1 = rep['handicapped-infants'].dropna()
y2 = dem['handicapped-infants'].dropna()
fix, ax = plt.subplots()
for sample in [y1, y2]:
sns.distplot(sample)
###Output
_____no_output_____
###Markdown
A visual comparison of democratic and republican votes. 3. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.01
###Code
print(ttest_ind(rep['religious-groups'], dem['religious-groups'], nan_policy='omit'))
print(rep['religious-groups'].mean(), 'Republican mean')
print(dem['religious-groups'].mean(), 'Democratic mean')
###Output
Ttest_indResult(statistic=9.737575825219457, pvalue=2.3936722520597287e-20)
0.8975903614457831 Republican mean
0.47674418604651164 Democratic mean
###Markdown
Here the small p-value, far below the 0.01 threshold, indicates that the two means are genuinely different. The null hypothesis is that the group means are equal (that the parties do not differ); the result strongly rejects it, i.e. this is evidence of a difference beyond mere chance, with republicans supporting this issue more than democrats.Ttest_indResult(statistic=9.737575825219457, pvalue=2.3936722520597287e-20)0.8975903614457831 Republican mean0.47674418604651164 Democratic mean
###Code
#import matplotlib.pyplot as plt
y1 = rep['religious-groups'].dropna()
y2 = dem['religious-groups'].dropna()
fix, ax = plt.subplots()
for sample in [y1, y2]:
sns.distplot(sample)
###Output
_____no_output_____
###Markdown
A visual comparison of democratic and republican votes. 4. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference)
###Code
print(ttest_ind(rep['water-project'], dem['water-project'], nan_policy='omit'))
print(rep['water-project'].mean(), 'Republican mean')
print(dem['water-project'].mean(), 'Democratic mean')
###Output
Ttest_indResult(statistic=0.08896538137868286, pvalue=0.9291556823993485)
0.5067567567567568 Republican mean
0.502092050209205 Democratic mean
###Markdown
Here the large p-value, well above 0.1, indicates that the two means are statistically indistinguishable. The null hypothesis is that the group means are equal; here we fail to reject it, i.e. there is no evidence of a difference between the parties on this issue. The t-statistic being near 0 also shows the means are very similar.Ttest_indResult(statistic=0.08896538137868286, pvalue=0.9291556823993485)0.5067567567567568 Republican mean0.502092050209205 Democratic mean
###Code
#import matplotlib.pyplot as plt
y1 = rep['water-project'].dropna()
y2 = dem['water-project'].dropna()
fix, ax = plt.subplots()
for sample in [y1, y2]:
sns.distplot(sample)
###Output
_____no_output_____
###Markdown
A visual comparison of democratic and republican votes. 5. Practice some 1-sample t-tests
###Code
#single sample t-tests
# passing nan_policy='omit'
ttest_1samp(rep['budget'], 0, nan_policy='omit')
#single sample t-tests
# passing nan_policy='omit'
ttest_1samp(rep['budget'], 1, nan_policy='omit')
ttest_1samp(dem['budget'], 0, nan_policy='omit')
ttest_1samp(dem['budget'], 1, nan_policy='omit')
ttest_1samp(rep['water-project'], 0, nan_policy='omit')
ttest_1samp(dem['water-project'], 0, nan_policy='omit')
###Output
_____no_output_____
###Markdown
6. You try and create some kind of a visualization that communicates the results of your hypothesis tests. This can be as simple as a histogram of the p-values or the t-statistics. Ttest_indResult(statistic=-9.205264294809222, pvalue=1.613440327937243e-18)0.18787878787878787 Republican mean0.6046511627906976 Democratic meanTtest_indResult(statistic=0.08896538137868286, pvalue=0.9291556823993485)0.5067567567567568 Republican mean0.502092050209205 Democratic meanTtest_indResult(statistic=9.737575825219457, pvalue=2.3936722520597287e-20)0.8975903614457831 Republican mean0.47674418604651164 Democratic mean
###Code
#https://pythonspot.com/matplotlib-bar-chart/
import matplotlib.pyplot as plt; plt.rcdefaults()
objects = ('Handicapped Kids(D)', 'Water Bill', 'Religion(R)')
y_pos = np.arange(len(objects))
# absolute t-statistics of the three tests above (handicapped-infants, water-project, religious-groups)
performance = [9.205264294809222, 0.08896538137868286, 9.737575825219457]
plt.bar(y_pos, performance, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('T-Scores of Political Votes')
plt.title('Republicans & Democrats Sometimes Agree in 1984')
###Output
_____no_output_____
|
notebooks/1.0-bl-initial-data-exploration.ipynb
|
###Markdown
Plot Image Size
###Code
# plot_image_dimensions and show_image_sample are project helpers assumed to be imported in an earlier (not shown) cell
plot_image_dimensions('../data/raw/pc/','../data/raw/dn/')
###Output
[########################################] | 100% Completed | 0.8s
###Markdown
View Images
###Code
show_image_sample('../data/raw/pc/','../data/raw/dn/')
###Output
next_blight_pix= ['../data/raw/pc/LBM_70_5Jan_PC.jpg', '../data/raw/pc/LBM_70B_5Jan_PC.jpg', '../data/raw/pc/DBM_55_13Jan_PC.jpg', '../data/raw/pc/DBM_58_13Jan_PC.jpg']
|
src/wxyz_notebooks/src/wxyz/notebooks/API/JSON-LD.ipynb
|
###Markdown
TODO: investigate in nbconvert frame = Frame(doc, frame=good_frame)assert not frame.errorassert frame.value["@graph"][0]["name"], frame.value TODO: investigate in nbconvert frame = Frame(doc, frame=bad_frame)assert not frame.errorassert not frame.value["@graph"][0]["name"], frame.value
###Code
# Normalize and doc are assumed to come from earlier cells of this notebook (wxyz JSON-LD widgets)
normalized = Normalize(doc)
assert not normalized.error
assert normalized.value
###Output
_____no_output_____
|
transformation.ipynb
|
###Markdown
Transformation
###Code
import numpy as np
import matplotlib as mpl
mpl.use('Agg') # Required to redirect locally
import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
%matplotlib inline
from pylab import rcParams
rcParams['figure.figsize'] = 6, 6
rcParams.update({'font.size': 22})
###Output
_____no_output_____
###Markdown
Vector $\vec{v}$ has coordinates $x$ and $y$ in base $b$.$$\vec{v} = \begin{bmatrix}x\\y\end{bmatrix}_{b}$$Base $b$ is defined by unit vectors $\hat{i}$ and $\hat{j}$.$$ b = \begin{Bmatrix} \hat{i} & \hat{j} \end{Bmatrix} $$
###Code
x = 1
y = 2
v_b = [x,y]
fig,ax = plt.subplots()
a = ax.arrow(0,0,x,y,head_width=0.1, head_length=0.1)
a = ax.text(1.10*x,1.10*y,r'$\vec{v}$')
a = ax.set_xlim(0,3)
a = ax.set_ylim(0,3)
a = ax.set_xlabel('$\hat{i}$')
a = ax.set_ylabel('$\hat{j}$')
a = ax.set_title(r'Vector $\vec{v}$ in base b')
ax.grid(True)
###Output
_____no_output_____
###Markdown
Unit vectors $\hat{i}$ and $\hat{j}$ have coordinates in another base $B$:$$\hat{i} = \begin{bmatrix}\cos(\phi)\\\sin(\phi)\end{bmatrix}_{B}$$$$\hat{j} = \begin{bmatrix}-\sin(\phi)\\\cos(\phi)\end{bmatrix}_{B}$$Base $B$ is defined by unit vectors $\hat{I}$ and $\hat{J}$.$$ B = \begin{Bmatrix} \hat{I} & \hat{J} \end{Bmatrix} $$A transformation matrix $R$ from base $b$ to $B$ can be created using $\hat{i}$ and $\hat{j}$ as its columns.$$R = \begin{bmatrix}\hat{i} & \hat{j}\end{bmatrix}_{B} = \begin{bmatrix}\cos(\phi) & -\sin(\phi) \\ \sin(\phi) & \cos(\phi) \end{bmatrix}_{B}$$The transformation is performed with matrix multiplication: the coordinates $X$, $Y$ of $\vec{v}$ in base $B$ are$$ \begin{bmatrix}X\\Y\end{bmatrix}_{B} = \begin{bmatrix}\cos(\phi) & -\sin(\phi) \\ \sin(\phi) & \cos(\phi) \end{bmatrix}_{B} \cdot \begin{bmatrix}x\\y\end{bmatrix}_{b} $$
###Code
from numpy import cos, sin
phi = np.deg2rad(25)
i_B = np.array([[cos(phi)],[sin(phi)]])
i_B
j_B = np.array([[-sin(phi)],[cos(phi)]])
j_B
R = np.concatenate((i_B,j_B),axis = 1)
R
v_B = R.dot(v_b)
X = v_B[0]
Y = v_B[1]
i_B[1,0]
fig,ax = plt.subplots()
a = ax.arrow(0,0,X,Y,head_width=0.1, head_length=0.1)
a = ax.text(1.10*X,1.10*Y,r'$\vec{v}$')
a = ax.arrow(0,0,i_B[0,0],i_B[1,0],head_width=0.1, head_length=0.1)
a = ax.text(1.20*i_B[0,0],1.20*i_B[1,0],'$\hat{i}$')
a = ax.arrow(0,0,j_B[0,0],j_B[1,0],head_width=0.1, head_length=0.1)
a = ax.text(1.20*j_B[0,0],1.20*j_B[1,0],'$\hat{j}$')
a = ax.set_xlim(-1,3)
a = ax.set_ylim(-1,3)
a = ax.set_xlabel('$\hat{I}$')
a = ax.set_ylabel('$\hat{J}$')
a = ax.set_title(r'Vector $\vec{v}$ in base B')
ax.grid(True)
###Output
_____no_output_____
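###Markdown
A quick check, added here as an aside: since $R$ is a rotation (orthogonal) matrix, its transpose maps the coordinates in base $B$ back to base $b$.
###Code
# transform back from base B to base b and compare with the original coordinates
v_b_back = R.T.dot(v_B)
print(np.allclose(v_b_back, v_b))  # expected: True
###Output
_____no_output_____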
|
Deep Learning/NLP Tutorials/word_vectors_demo.ipynb
|
###Markdown
Creating word vectors using word2vec Load dependencies
###Code
import nltk
from nltk import word_tokenize, sent_tokenize
import gensim
from gensim.models.word2vec import Word2Vec
from sklearn.manifold import TSNE
import pandas as pd
from bokeh.io import output_notebook
from bokeh.plotting import show, figure
%matplotlib inline
nltk.download('punkt') # English-language sentence tokenizer (not all periods end sentences; not all sentences start with a capital letter)
nltk.download('gutenberg')
from nltk.corpus import gutenberg
len(gutenberg.fileids())
gutenberg.fileids()
###Output
_____no_output_____
###Markdown
Tokenize the text
###Code
gberg_sent_tokens = sent_tokenize(gutenberg.raw())
len(gberg_sent_tokens)
gberg_sent_tokens[1]
word_tokenize(gberg_sent_tokens[1])
# a convenient method that handles newlines, as well as tokenizing sentences and words in one shot
gberg_sents = gutenberg.sents()
gberg_sents[:5]
gberg_sents[4][14]
len(gutenberg.words())
###Output
_____no_output_____
###Markdown
Run word2vec
###Code
# model = Word2Vec(sentences=gberg_sents, size=64, sg=1, window=10, min_count=5, seed=42)
# model.save('raw_gutenberg_model.w2v')
model = gensim.models.Word2Vec.load('raw_gutenberg_model.w2v')
model.wv['dog']
len(model.wv['dog'])
model.wv.most_similar('dog')
words = list(model.wv.vocab)
print(len(words))
model.wv.most_similar('father')
###Output
_____no_output_____
###Markdown
Reduce word vector dimensionality
###Code
X = model.wv[model.wv.vocab]
# tsne = TSNE(n_components=2, n_iter=1000, verbose=True) # 200 is minimum iter; default is 1000
# X_2d = tsne.fit_transform(X)
# coords_df = pd.DataFrame(X_2d, columns=['x','y'])
# coords_df['token'] = model.wv.vocab.keys()
# coords_df.to_csv('raw_gutenberg_tsne.csv', index=False)
###Output
_____no_output_____
###Markdown
Visualize 2d representation of word vectors
###Code
coords_df = pd.read_csv('raw_gutenberg_tsne.csv')
_ = coords_df.plot.scatter('x', 'y', figsize=(12,12), marker='.', s=10, alpha=0.2)
output_notebook() # output bokeh plots inline in notebook
subset_df = coords_df.sample(n=5000)
p = figure(plot_width=800, plot_height=800)
_ = p.text(x=subset_df.x, y=subset_df.y, text=subset_df.token)
show(p)
###Output
_____no_output_____
|
PDA/jupyter/jupyterNotebooks/07Course_closing.ipynb
|
###Markdown
Programming and Data Analytics 1 2021/2022Sant'Anna School of Advanced Studies, Pisa, ItalyCourse responsibleAndrea Vandin [email protected] Daniele Licari [email protected] Lecture 7: Course closing and Advanced Libraries fordata manipulation/visualizationOverview of NumPy & Pandas --- Kahoot: how much do you know about the topics of the last class?
###Code
from IPython.display import Image, display
url_github_repo="https://github.com/EMbeDS-education/StatsAndComputing20212022/raw/main/PDA/"
img=Image(url_github_repo+'jupyter/jupyterNotebooks/images/kahootClass5.JPG',width=700)
display(img)
###Output
_____no_output_____
###Markdown
* Using your phone or a different display go to [https://kahoot.it/](https://kahoot.it/)* Type the given PIN
###Code
from IPython.display import IFrame
IFrame("https://kahoot.it/", 500, 400)
###Output
_____no_output_____
###Markdown
--- What have we learned?
###Code
img=Image(url_github_repo+'jupyter/jupyterNotebooks/images/lastKahootMeme.JPG',width=700)
display(img)
img=Image(url_github_repo+'jupyter/jupyterNotebooks/images/tentativeLecturePlan.png',width=700)
display(img)
###Output
_____no_output_____
###Markdown
Lecture Plan| Class | Date | Time | Topic ||:----------:|-----------------------------|--|--||1| 14/02 | 15:00-17:00 | Course introduction ||2| 16/02 | 15:00-18:00 | Data types & operations ||3| 18/02 | 15:00-18:00 | Collections & First taste of plots ||4| 21/02 | 15:00-18:00 | Control statements (if, loops) & CSV manipulation/visualization on COVID-19 data | |5| 25/02 | 15:00-18:00 | Functions & Applications to analysis of epidemiological models & to Generation of WordClouds from online news ||6| 28/02 | 15:00-18:00 | Modules & Exceptions & OOP & Applications to betting markets (ABM models) ||7| 04/03 | 15:00-18:00 | Advanced libraries for data manipulation (NumPy, Pandas) & Application to COVID-19 and Finance data |> Note: we created this table using Markdown. > [There are also online table generators](https://www.tablesgenerator.com/markdown_tables) We managed to * Cover all topics* Give applications* Let you improve and monitor your learning process using online platforms * Assignments on Repl.it * Kahoot quizzes This is how we described the course during the first class
###Code
from IPython.display import Image, display
img=Image(filename='images/courseDescription.png',width=800)
display(img)
from IPython.display import Image, display
img=Image(filename='images/learningObjectives.png',width=800)
display(img)
###Output
_____no_output_____
###Markdown
The final exam __The exam__* The exam is an oral examination with Daniele and me* We will do it remotely using WebEx* The oral examination will start discussing some of the solutions you gave to the assignments, * From there we might ask you additional questions about topics covered in the course * __Therefore you should complete the assignments!____Who is interested in the exam?__* 32 students marked their interest in doing the exam in our [Google sheet](https://docs.google.com/spreadsheets/d/1WcQb87uaSC7RKZVT5fRB6vHftkTrEWEyGo1npdBxxVg/editgid=1960976968) * Do it now if you have not done it yet * No free slots? Please send us an email.* Please book your time slot here [Exam Schedule](https://docs.google.com/spreadsheets/d/1SFmOkrWf82_WMKvZEGwa0GnIM_zljlLOxFRnkhjoX3k/editgid=0)* On the day and time of your exam - Follow this link to my WebEx virtual room: https://sssup.webex.com/meet/a.vandin - We have a tight schedule. __Be there on time!__
###Code
from IPython.display import IFrame
IFrame("https://docs.google.com/spreadsheets/d/16JrCpCim8a-c2kJX6w4YrtQsWx5gFiL33rIWfw4YMwU/edit#gid=0", 1000, 800)
###Output
_____no_output_____
|
Week2-Introduction-to-Open-Data-Importing-Data-and-Basic-Data-Wrangling/Week2-Introduction-to-Data-Manipulation.ipynb
|
###Markdown
Data Manipulation with Pandas Table of Contents- Why, Where and How we use Pandas - What is Pandas? - Data structures in Pandas- What we will be learning today - About the dataset - Goals- Importing Pandas library- Loading a file- Setting an index- Getting info about the dataset- Removing NaN (None) values - **1.0 - Now Try This**- Removing a column- Selecting subsets of data - **2.0 - Now Try This** - **3.0 - Now Try This** - **4.0 - Now Try This**- Filtering dataset based on criteria - **5.0 - Now Try This** - **6.0 - Now Try This**- Aggregation functions - Sum - Min / Max - Mean - **7.0 - Now Try This**- Practical Exercise - About the dataset - Setting an index - **8.0 - Now Try This** - Aggregate functions - **9.0 - Now Try This** - **10.0 - Now Try This** - **11.0 - Now Try This** Why, Where and How we use Pandas What is Pandas?This week, we will cover the basic data manipulation using Pandas.1. Pandas is an open-source data analysis and manipulation tool and it is widely used both in academia and industry.2. It is built on top of the Python programming language. 3. It offers data structures and operations for manipulating numerical tables and time series. Data structures in PandasPandas provides three data structures: Series, DataFrame, and Panel. 1. A Series is a 1-dimensional labeled array and a 1-dimensional array represents a single column in excel. It can hold data of **any type** (integer, string, python objects, etc.) and its labels are called indices.2. A DataFrame is a 2-dimensional labeled data structure with both rows and columns and a 2-dimensional array represents tabluar data.3. A panel is a 3-dimensional. This week, we will focus on DataFrame and we will learn Series in later weeks. We will not cover Panel in this semester, as it's not used as often as two other data structures.Since we've covered the fundamentals of Python, it will be fairly easy to pick up Pandas. What we will be learning today Goals:- Getting a quick overview of the dataset - Removing column / rows with NaN values- Selecting and filtering based on criteria- Analyze the survival rates in the Titanic dataset About the dataset: Titanic To begin with Pandas and dataframes, we will use a dataset about the Titanic. Titanic was a British passenger liner operated by the White Star Line that sank in the North Atlantic Ocean in 1912, after striking an iceberg during her maiden voyage from Southampton to New York City. Of the estimated 2,224 passengers and crew aboard, more than 1,500 died.This dataset does not have all of the passengers, but has the following info for a third of all passengers aboard: name, age, gender, ticket price, and most importantly whether or not they survived.As each person has its unique PassengerID, each row is a unique entity / passenger. Import Pandas libraryTo read / load a file, we will need to import Pandas.It's a convention to use ``` import pandas as pd``` when importing Pandas library.
###Code
import pandas as pd
###Output
_____no_output_____
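###Markdown
Before moving on, a minimal illustration of the Series structure described earlier (the values and labels below are made up for illustration):
###Code
# a Series is a 1-dimensional labeled array: one column of values plus an index
ages = pd.Series([22, 38, 26], index=['Braund', 'Cumings', 'Heikkinen'])
ages
###Output
_____no_output_____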
###Markdown
Once we've imported Pandas, we can use ``` pd``` to call any functions in Pandas. Load fileTo read the csv file with our data, we will use the ```read_csv``` function.Since we are working with only one dataset, we will just call dataframe as df. But, if we are working with lots of dataframes, it's better to give a meaningful name (ex: titanic_data, passenger_info, etc.)
###Code
url = 'https://raw.githubusercontent.com/bitprj/Bitcamp-DataSci/master/Week2-Introduction-to-Open-Data-Importing-Data-and-Basic-Data-Wrangling/data/titanic-dataset.csv'
df = pd.read_csv(url)
df
###Output
_____no_output_____
###Markdown
Set indexNow, the dataset is loaded as a dataframe 'df'The first column is an index column and it starts from 0 by default.But, as you can tell, PassengerId itself is a unique index. So, let's set PassengerId as an index.We can call ```set_index``` function and specify the index using ```keys=```
###Code
df.set_index(keys='PassengerId')
###Output
_____no_output_____
###Markdown
The code above worked! Now PassengerId is a new index for df.Let's call df one more time to make sure that df has been updated to reflect the change.
###Code
df
###Output
_____no_output_____
###Markdown
IMPORTANT: df has NOT been updated. Do you know why?```df.set_index(keys='PassengerId')```: this function sets PassengerId as an index when we CALL the function. Since we didn't save the function call, df has NOT been updated.There are two ways to save the change.1. ```df = df.set_index(keys='PassengerId')```2. ```df.set_index(keys='PassengerId', inplace = True)```First function call reassigns a variable ```df``` to the updated ```df``` and second function call makes changes in-place.
###Code
df.set_index(keys='PassengerId', inplace=True)
df
###Output
_____no_output_____
###Markdown
Okay, now PassengerId is set to index! Basic info about the datasetNow, let's get basic information about the dataframe.- head()- describe()- info() head()```head()``` function is useful to see the dataset at a quick glance as it returns the first n rows.Let's check what columns this file has by calling ```head()``` function.By default, ```head()``` returns the first 5 rows.
###Code
df.head()
###Output
_____no_output_____
###Markdown
You can specify the number of rows to display by calling ```df.head(number)```
###Code
df.head(10)
###Output
_____no_output_____
###Markdown
info()Now, we know what's in the dataset and what it looks like.To summarize what information is available in the dataset, we can use the info() function.This function is useful as this returns all of the **column names** and **its types** as well as **Non-Null** counts.
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 891 entries, 1 to 891
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 891 non-null int64
1 Pclass 891 non-null int64
2 Name 891 non-null object
3 Sex 891 non-null object
4 Age 714 non-null float64
5 SibSp 891 non-null int64
6 Parch 891 non-null int64
7 Ticket 891 non-null object
8 Fare 891 non-null float64
9 Cabin 204 non-null object
10 Embarked 889 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 83.5+ KB
###Markdown
We can tell that "Age" and "Cabin" have lots of missing values; the dataset only has data for 714 ages and 204 cabins for the 891 passengers.If we take a closer look at dtypes in the second to last row, there are three dtypes: ```int64```, ```float64```, and ```object```.We have covered ```int``` and ```float``` last week in Python, but what is an object?- ```int64```: integer numbers- ```float64```: floating point numbers- ```object```: string or mixed numeric and non-numeric values.That's why the dtype of "Name", "Sex", "Embarked" is ```object```, as it is a string."Ticket" and "Cabin" are ```objects``` as they are in a format of numbers or string + numbers (Ex: A/5 21171, C85) describe()```describe()``` is used to view summary statistics of numeric columns. This helps us to have a general idea of the dataset.```Count```, ```mean```, ```min```, and ```max``` are straightforward.Let's refresh our memory with statistical concepts.- ```std```: standard deviation - measures the dispersion of a dataset relative to its mean. If the data points are further from the mean, there is a higher deviation within the dataset. The more spread out the data, the higher the standard deviation.- ```25%```: the value below which 25% of the observations may be found. - ```50%```: the value below which 50% of the observations may be found. - ```75%```: the value below which 75% of the observations may be found. For example, 25th percentile of age is 20.125 and 75th percentile of age is 38. This means that 25% of the passengers' age is less than 20.125 and 75% of the passengers' age is less than 38.
###Code
df.describe()
###Output
_____no_output_____
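###Markdown
A single percentile can also be computed directly; a minimal sketch matching the 25% row of the table above:
###Code
# 25th percentile of passenger age (same value as the 25% row of describe())
df['Age'].quantile(0.25)
###Output
_____no_output_____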
###Markdown
shapeTo see the size of the dataset, we can use ```shape``` function, which returns the number of rows and columns in a format of ( rows, columns)This dataset has 891 rows (entities) and 12 columns.
###Code
df.shape
###Output
_____no_output_____
###Markdown
Remove NaN valuesOftentimes, when we work with large datasets, we will encounter cases where there are lots of missing elements (NaN / null) in the dataset.Removing NaN values will allow us to drop the rows and to work with clean datasets.---Let's remove the rows that do not provide meaningful information.When we know a "unique key" of the dataset (PassengerID in this dataset), we can check whether all elements have PassengerID. If any of the rows are missing PassengerID, then we can drop that entity.* ```df.dropna()```: drop the rows where at least one of the elements is missing.* ```df.dropna(how='all')```: drop the rows where all of the elements are missing.* ```df.dropna(subset=[columns])```: define in which columns to look for missing values. If we want to drop the rows with **at least** one missing element:
###Code
df.dropna()
###Output
_____no_output_____
###Markdown
If we want to drop the rows with **all** elements missing:
###Code
df.dropna(how='all')
###Output
_____no_output_____
###Markdown
If we want to drop the rows that are missing Survived value.
###Code
df.dropna(subset=['Survived'])
###Output
_____no_output_____
###Markdown
Yay, we've confirmed that all of the passengers have 'Survived' value since the number of rows remains the same.If we want to update the dataset after dropping rows, we can use ```inplace = True```
###Code
df.dropna(subset=['Survived'], inplace=True)
###Output
_____no_output_____
###Markdown
1.0 - Now Try This- Drop the rows that are missing any of the following columns: 'Pclass'- Update ```df``` Removing a columnBefore we dive into the data analysis, let's see if there are any columns we want to remove.
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 891 entries, 1 to 891
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Survived 891 non-null int64
1 Pclass 891 non-null int64
2 Name 891 non-null object
3 Sex 891 non-null object
4 Age 714 non-null float64
5 SibSp 891 non-null int64
6 Parch 891 non-null int64
7 Ticket 891 non-null object
8 Fare 891 non-null float64
9 Cabin 204 non-null object
10 Embarked 889 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 83.5+ KB
###Markdown
In "Cabin" column, there are only 204 rows that are non-null. That means 891 - 204 = 687 rows are missing in the column.It wouldn't give us as meaningful insight as the other columns, so let's remove "Cabin" column by using ```del``` function.
###Code
del df['Cabin']
df
###Output
_____no_output_____
###Markdown
Select subsets of dataWhen we are interested in a few columns to do the data analysis, we can select a specific subset of columns using two methods:1. by index location2. by column names 1. by index locationWe can select specific subsets of data using ```iloc[rows_index, columns_index]```.As we learned last week, ```[:]``` selects everything in a list or string in Python. Similarly, ```[:]``` will select every row or column depending on where we put it.Let's select PassengerId, Survived, and Pclass.
###Code
# by index location (iloc)
df.iloc[: , [0,1,2]]
###Output
_____no_output_____
###Markdown
We selected 0, 1, and 2 because PassengerId, Survived, and Pclass are the first 3 columns.Hmm, but 4 columns showed up. Let's look into why!Because PassengerId is the default index, it shows up automatically.So, the index location 0 will be the first column right after the index column. 2.0 - Now Try ThisSelect PassengerId, Survived, and Pclass with all rows. 3.0 - Now Try ThisSelect PassengerId, Survived, and Pclass with all rows.Please use semi-colon ```:``` this time. 2. by column namesLet's select subsets of data by column names.We are interested in PassengerId, Survived, Sex, and Age.
###Code
# by column names
df[['PassengerId', 'Survived', 'Sex', 'Age']]
# (The error here is expected!)
###Output
_____no_output_____
###Markdown
The code above doesn't work. What does the KeyError say?```['PassengerId'] not in index.```Remember? PassengerId is no longer a column, so we can't select it by a column name. 4.0 - Now Try ThisSelect PassengerId, Survived, Sex, and Age. Filter Dataset based on criteriaOftentimes, we are interested in working with specific rows that meet certain criteria. If we only want to look at the data with Age > 30, we can specify the criteria within ```loc``` function.
###Code
df_over_30yrs=df.loc[df['Age'] > 30]
df_over_30yrs
###Output
_____no_output_____
###Markdown
Now, let's select the dataset using two criteria -- where "Age" is greater than 30 AND "Survived."```&``` is equivalent to ```AND``` and ```|``` is equivalent to ```OR``` in dataframe.
###Code
df_over_30yrs_survived = df.loc[(df['Age'] > 30) & (df['Survived'] == 1)]
df_over_30yrs_survived
###Output
_____no_output_____
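###Markdown
A small extra sketch (using criteria different from the exercises below) of the `|` (OR) operator mentioned above:
###Code
# passengers who are either female OR travelling in first class
df_female_or_first = df.loc[(df['Sex'] == 'female') | (df['Pclass'] == 1)]
df_female_or_first.shape
###Output
_____no_output_____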
###Markdown
IMPORTANT: When filtering with multiple conditions, make sure to use ```()``` on each condition. Otherwise, you will get an error message that ```The truth value of a Series is ambiguous.``` Let's check how many passengers survived among the ones whose age was over 30.
###Code
print("# of passengers whose age was over 30: ", df_over_30yrs.shape[0])
print("# of survived passengers whose age was over 30: ", df_over_30yrs_survived.shape[0])
###Output
# of passengers whose age was over 30: 305
# of survived passengers whose age was over 30: 124
###Markdown
5.0 - Now Try ThisSelect the dataset that meets the following condition:- Pclass is not 1 6.0 - Now Try ThisSelect the dataset that meets the following conditions:- Age is less than 10 - OR- Age is greater than 50Hint: Don't forget parenthesis! Aggregation functionsAggregation is the process of combining things. It's useful to understand the overall properties of the dataset and analyze it.Some examples of aggregation are ```sum()```, ```count()```, ```min()```, ```max()```, ```mean()```, ```std()```, etc. Sum 1. Total fares
###Code
df['Fare'].sum()
###Output
_____no_output_____
###Markdown
If we want to round total fares and save it as a variable, then we can try:
###Code
total_fares = df['Fare'].sum()
print("Total fares: ", round(total_fares))
###Output
Total fares: 28694.0
###Markdown
2. Survived passengersWe can also count the number of passengers that survived by summing up the 'Survived' column.
###Code
survived_passengers = df['Survived'].sum()
survived_passengers
###Output
_____no_output_____
###Markdown
Max / MinLet's calculate the max and min of Fare.
###Code
df['Fare'].max()
df['Fare'].min()
###Output
_____no_output_____
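###Markdown
The other aggregations listed earlier work the same way; a brief sketch with `count` and `std`:
###Code
# number of non-null ages and their standard deviation
print(df['Age'].count())
print(df['Age'].std())
###Output
_____no_output_____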
###Markdown
MeanLet's calculate the survival rate of all passengers.
###Code
df['Survived'].mean()
###Output
_____no_output_____
###Markdown
Let's calculate the average age of all passengers.
###Code
df['Age'].mean()
###Output
_____no_output_____
###Markdown
Now, we will tackle a more complex problem.Let's calculate the survival rate by the *age* group.We can apply the filtering that we just learned to select the group whose age was over 30 and whose age was under 30.In each group, we will calculate the mean of 'Survived' column.
###Code
# filtering
df_over_30yrs = df.loc[df['Age'] > 30]
df_under_30yrs = df.loc[df['Age'] <= 30]
# calculating mean of 'Survived' for each group
mean_over_30 = df_over_30yrs['Survived'].mean()
mean_under_30 = df_under_30yrs['Survived'].mean()
# printing the mean survival rates for each group
# round(number, decimal_points): round a number to a given precision in decimal_points
print("Survival rate - age over 30: ", round(mean_over_30*100, 3), "%")
print("Survival rate - age under 30: ", round(mean_under_30*100, 3), "%")
###Output
Survival rate - age over 30: 40.656 %
Survival rate - age under 30: 40.587 %
###Markdown
There's not much difference between the two groups. GroupbyNow, we will group by *sex* to see if there's any difference between female and male.Here, we use ```groupby``` aggregate function and it will let us group the dataset by that column ('Sex')
###Code
df.groupby(['Sex']).mean()
###Output
_____no_output_____
###Markdown
If we want to sort the aggregate funtion by a column, we can use ```sort_values(by=column_name)```.Let's sort the above aggregate function by "Survived"
###Code
df.groupby(['Sex']).mean().sort_values(by="Survived")
###Output
_____no_output_____
###Markdown
If we are interested in the survival rate of each group, we can use ```[ ]``` after the groupby call to specify which column to display.
###Code
# group by 'Sex' and calculate mean of 'Survived'
df.groupby(['Sex'])['Survived'].mean()
###Output
_____no_output_____
###Markdown
There was a significant difference in the survival rate by *sex*! We can also apply groupby on multiple columns.
###Code
df.groupby(['Sex', 'Pclass']).mean()
###Output
_____no_output_____
###Markdown
In both groups (female and male), the survival rate was a lot higher in Pclass 1 than in the other classes! 7.0 - Now Try ThisThen, some of us might be curious:Would lower Pclass be more expensive or higher Pclass be more expensive?We can answer the question by calculating the mean fares for each class.**Calculate the mean fares for each Pclass!** Let's pause and think!Did you see any correlation between Pclass, Fare, and Survival rate? Briefly describe what you have found here. TakeawaysUsing DataFrames and aggregate functions in Pandas, we can answer any questions that might come up!In the tutorial section, we will apply what we have learned in Pandas and further analyze a new dataset. Practical Exercise About the datasetWhat we will be using in the tutorial is the US Census Demographic Data.The data here were collected by the US Census Bureau and it includes data from the entire country.This dataset covers lots of areas: state, county, gender, ethnicity, professional working fields, means of transportation to work, and employment.All of this information is available at a State and County level. There are many questions that we could try to answer with the dataset:- Unemployment by state- Professional fields by state and county- Means of transportation to work by county in CA- ... ObjectiveSince the dataset covers all of the states in the US, we are going to select the top 5 largest states by population. Once we've selected the top five states, we will examine the residents' means of transportation to work at a state and county level.That's our focus in the tutorial, but feel free to play around with it as you'd like.Let's load our data first! Load file
###Code
url = 'https://raw.githubusercontent.com/bitprj/Bitcamp-DataSci/master/Week2-Introduction-to-Open-Data-Importing-Data-and-Basic-Data-Wrangling/data/acs2017_county_data.csv'
df = pd.read_csv(url)
df
###Output
_____no_output_____
###Markdown
8.0 - Now Try ThisCountyId is a unique identifier for each county and state.- Set CountyId as an index.- Update dfHint: Use ```set_index``` and don't forget to update df Basic info about the dataset
###Code
df.head()
###Output
_____no_output_____
###Markdown
Since there are so many columns, the head function doesn't display all columns.Let's use info() as it returns **ALL** of the **column names**, its types, and Non-Null counts.
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3220 entries, 0 to 3219
Data columns (total 37 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CountyId 3220 non-null int64
1 State 3220 non-null object
2 County 3220 non-null object
3 TotalPop 3220 non-null int64
4 Men 3220 non-null int64
5 Women 3220 non-null int64
6 Hispanic 3220 non-null float64
7 White 3220 non-null float64
8 Black 3220 non-null float64
9 Native 3220 non-null float64
10 Asian 3220 non-null float64
11 Pacific 3220 non-null float64
12 VotingAgeCitizen 3220 non-null int64
13 Income 3220 non-null int64
14 IncomeErr 3220 non-null int64
15 IncomePerCap 3220 non-null int64
16 IncomePerCapErr 3220 non-null int64
17 Poverty 3220 non-null float64
18 ChildPoverty 3219 non-null float64
19 Professional 3220 non-null float64
20 Service 3220 non-null float64
21 Office 3220 non-null float64
22 Construction 3220 non-null float64
23 Production 3220 non-null float64
24 Drive 3220 non-null float64
25 Carpool 3220 non-null float64
26 Transit 3220 non-null float64
27 Walk 3220 non-null float64
28 OtherTransp 3220 non-null float64
29 WorkAtHome 3220 non-null float64
30 MeanCommute 3220 non-null float64
31 Employed 3220 non-null int64
32 PrivateWork 3220 non-null float64
33 PublicWork 3220 non-null float64
34 SelfEmployed 3220 non-null float64
35 FamilyWork 3220 non-null float64
36 Unemployment 3220 non-null float64
dtypes: float64(25), int64(10), object(2)
memory usage: 930.9+ KB
###Markdown
There are 36 columns (besides the CountyId index) with 3220 counties.Also, the data is essentially complete: every column has 3220 non-null values except ChildPoverty, which is missing a single one. That's great!Let's view summary statistics of the numeric columns and figure out what we want to get out of this dataset.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
There are 34 numerical columns out of 36 in this dataset.As we saw in the objective, there are lots of things we can do with this dataset.But, let's pick the top 5 states with the largest population and most employees and work from there! Select subsets of dataBefore we dive into the data analysis, let's select the columns we want to work with. As we learned earlier, we can select a specific subset of columns using two methods:1. by index location2. by column names As this dataset has so many columns, let's take a look at all of the columns first.We can call ```columns``` and it will return all of the column names - we can use ```df.info()``` as well.
###Code
df.columns
###Output
_____no_output_____
###Markdown
Since there are so many columns, it's hard to count the index location. So we will use the column names to select subsets of data!As discussed in the objective, our main focus is transportation methods for workers and population. So, we will select the following columns!
###Code
# by column names
df_emp = df[['State', 'County', 'TotalPop', 'Income', 'Employed', 'Drive', 'Carpool', 'Transit', 'Walk', 'OtherTransp', 'WorkAtHome']]
df_emp
###Output
_____no_output_____
###Markdown
Aggregation Total population by stateLet's get total population by state to select the top 5 largest states by population.To calculate this, we need to group by state, and we will need to get ```sum``` of ```TotalPop```. 9.0 - Now Try This- Calculate total population by state from ```df_emp```- Name the dataframe as ```state_pop``` Total population by state -- sortedSince there are so many states, it will be easier to see which states have the largest population if we can sort the dataset.If we use ```sort_values()``` and it's going to sort the data by the aggregation function value.
###Code
state_pop = df_emp.groupby(['State'])['TotalPop'].sum().sort_values(ascending=False)
state_pop
###Output
_____no_output_____
###Markdown
Top 5 statesIf we want to see the top 5 results from any dataframes, we can use ```df.head(n)``` function to display the first n rows.
###Code
state_pop.head(5)
###Output
_____no_output_____
###Markdown
These are the top 5 largest states by population:- California - Texas - Florida - New York - Illinois We are going to use ```loc``` as we want to filter dataset based on criteria.We could use the line below: selecting rows if 'State' is California, Texas, Florida, New York, or Illinois.
###Code
df_emp.loc[(df_emp['State']=='California') | (df_emp['State']=='Texas') | (df_emp['State']=='Florida') | (df_emp['State']== "New York") | (df_emp['State']=='Illinois')]
###Output
_____no_output_____
###Markdown
But, the code above is very lengthy so we will learn a shortcut!We can use ```isin(list_of_values)``` function to see if 'State' is in state_list.The syntax above is very similar to ```'a' in ['a','b','c']``` in Python.
###Code
five_states = df_emp.loc[df_emp['State'].isin(['California','Texas','Florida','New York','Illinois'])]
five_states
###Output
_____no_output_____
###Markdown
Average income by stateNow, we have selected five states to work with, and let's get the average income in each state.
###Code
five_states.groupby(['State'])['Income'].mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
California has the highest average income and Florida has the lowest average income amongst these five states. Total number of employees by state 10.0 - Now Try This- Calculate the total number of employees by state- Sort by value- Write down the state with the highest number of employees and the state with the lowest number of employees.Hint: use ```groupby``` Means of transportation to work by StateLet's look at each state's transit mode and see which transit mode is most popular in each state.We will groupby 'State' and we will get the sum of all of the transit modes.
###Code
five_states.groupby(['State'])[['Drive','Carpool','Transit','Walk','OtherTransp','WorkAtHome']].sum()
###Output
_____no_output_____
###Markdown
As California is the largest state by both population and employment, we will work with California dataset only. 11.0 - Now Try This Step 1: Select California state only and save it as ```ca_transit```
###Code
# step 1: Select California state only
###Output
_____no_output_____
###Markdown
Step 2: Calculate ```sum``` of all transit modes by ```county``` in California.
###Code
# Step 2: Calculate the sum of all transit modes by county in California.
###Output
_____no_output_____
|
TF model (2 lags).ipynb
|
###Markdown
Building and fitting NN
###Code
import sys
import pandas as pd
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import spsolve
import random
import os
import scipy.stats as ss
import scipy
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from catboost import CatBoostClassifier, Pool, sum_models, to_classifier
from sklearn.model_selection import KFold
from sklearn.utils.class_weight import compute_class_weight
import implicit
#df = pd.read_csv("../input/feature-creating-v2/train_3lags_v3.csv", low_memory=False)
#df = df.loc[df['order_count'] == 2]
#df.head()
#threshold = 0.0005
#counts = df['service_title'].value_counts(normalize=True)
#df2 = df.loc[df['service_title'].isin(counts[counts > threshold].index), :]
#sub15 = pd.read_csv("../input/catboost-fitting-smart/submission_15.csv")
#d_134 = df2.loc[df2['service_title'] == 134][:121500]
#d_98 = df2.loc[df2['service_title'] == 98][:121500]
#df2 = df2.loc[df2['service_title'] != 134]
#df2 = df2.loc[df2['service_title'] != 98]
#df2 = pd.concat([df2,d_134], axis=0)
#df2 = pd.concat([df2,d_98], axis=0)
#df2.to_csv('train_3lags_semibalanced.csv', index=False)
df2 = pd.read_csv("../input/irkutsk/train_3lags_semibalanced.csv", low_memory=False)
#df2 = pd.read_csv("../input/irkutsk/train_3lags_v4.csv", low_memory=False)
#df2.drop(['Unnamed: 0'], axis=1, inplace=True)
#X_clustered = pd.read_csv("../input/irkutsk/df_train_clustered_3lags.csv", low_memory=False)
#X_clustered.columns
#sub = pd.read_csv("../input/nn-sub/nn_sub_5fold_2.csv")
#df2['service_title'].value_counts(normalize=True)[:30]
#sub['service_title'] = 1259
#sub.to_csv('check_1259.csv', index=False)
#sub['service_title'].value_counts(normalize=True)[:30]
df2 = df2.sample(frac=1).reset_index(drop=True)
df2 = df2.drop(['service_3', 'service_title_3', 'mfc_3', 'internal_status_3',
'external_status_3', 'order_type_3', 'department_id_3',
'custom_service_id_3', 'service_level_3', 'is_subdep_3', 'is_csid_3',
'month_3', 'week_3', 'year_3', 'dayofweek_3', 'day_part_3', 'person_3',
'sole_3', 'legal_3', 'auto_ping_queue_3'], axis=1)
df2 = df2.drop(['proc_time_3', 'win_count_3'], axis=1)
df2.dropna(inplace=True)
df_test = df2[['service_title']]
df_train = df2.drop(['service_title', 'order_count'], axis=1)
df_train = df_train[['service_1', 'service_title_1', 'mfc_1', 'internal_status_1',
'external_status_1', 'order_type_1', 'department_id_1',
'custom_service_id_1', 'service_level_1', 'is_subdep_1', 'is_csid_1',
'dayofweek_1', 'day_part_1', 'month_1', 'week_1', 'year_1', 'person_1',
'sole_1', 'legal_1', 'auto_ping_queue_1', 'service_2', 'service_title_2', 'mfc_2', 'internal_status_2',
'external_status_2', 'order_type_2', 'department_id_2',
'custom_service_id_2', 'service_level_2', 'is_subdep_2', 'is_csid_2',
'dayofweek_2', 'day_part_2', 'person_2', 'sole_2', 'month_2', 'week_2',
'year_2', 'legal_2', 'auto_ping_queue_2', 'requester_type', 'gender',
'age']]
df_train.reset_index(inplace=True)
df_test.reset_index(inplace=True)
df_train.drop(['index'], axis=1, inplace=True)
df_test.drop(['index'], axis=1, inplace=True)
categorical = ['service_1', 'service_title_1', 'mfc_1',
'internal_status_1', 'external_status_1', 'order_type_1',
'department_id_1', 'custom_service_id_1', 'service_level_1',
'is_subdep_1', 'is_csid_1', 'dayofweek_1', 'day_part_1', 'month_1', 'week_1', 'year_1',
'person_1', 'sole_1', 'legal_1', 'auto_ping_queue_1',
'service_2', 'service_title_2', 'mfc_2', 'internal_status_2',
'external_status_2', 'order_type_2', 'department_id_2',
'custom_service_id_2', 'service_level_2', 'is_subdep_2', 'is_csid_2',
'dayofweek_2', 'day_part_2', 'person_2', 'sole_2', 'month_2', 'week_2', 'year_2',
'legal_2', 'auto_ping_queue_2','service_3',
'service_title_3', 'mfc_3', 'internal_status_3', 'external_status_3',
'order_type_3', 'department_id_3', 'custom_service_id_3',
'service_level_3', 'is_subdep_3', 'is_csid_3', 'month_3', 'week_3', 'year_3',
'dayofweek_3', 'day_part_3', 'person_3', 'sole_3', 'legal_3',
'auto_ping_queue_3',
'requester_type', 'gender']
cat = ['service_1', 'service_title_1', 'mfc_1',
'internal_status_1', 'external_status_1',
'department_id_1', 'custom_service_id_1', 'month_1', 'week_1', 'year_1',
'is_subdep_1', 'is_csid_1', 'dayofweek_1',
'service_2', 'service_title_2', 'mfc_2', 'internal_status_2',
'external_status_2', 'department_id_2', 'month_2', 'week_2', 'year_2',
'custom_service_id_2', 'is_subdep_2', 'is_csid_2',
'dayofweek_2', 'service_3',
'service_title_3', 'mfc_3', 'internal_status_3', 'external_status_3',
'department_id_3', 'custom_service_id_3',
'is_subdep_3', 'is_csid_3', 'month_3', 'week_3', 'year_3',
'dayofweek_3',
'requester_type', 'gender']
X = df_train
y = df_test['service_title']
categorical = ['service_1', 'service_title_1', 'mfc_1',
'internal_status_1', 'external_status_1', 'order_type_1',
'department_id_1', 'custom_service_id_1', 'service_level_1',
'is_subdep_1', 'is_csid_1', 'dayofweek_1', 'day_part_1', 'month_1', 'week_1', 'year_1',
'person_1', 'sole_1', 'legal_1', 'auto_ping_queue_1','service_2', 'service_title_2', 'mfc_2', 'internal_status_2',
'external_status_2', 'order_type_2', 'department_id_2',
'custom_service_id_2', 'service_level_2', 'is_subdep_2', 'is_csid_2',
'dayofweek_2', 'day_part_2', 'person_2', 'sole_2', 'month_2', 'week_2', 'year_2',
'legal_2', 'auto_ping_queue_2',
'requester_type', 'gender']
cat = ['service_1', 'service_title_1', 'mfc_1',
'internal_status_1', 'external_status_1',
'department_id_1', 'custom_service_id_1', 'month_1', 'week_1', 'year_1',
'is_subdep_1', 'is_csid_1', 'dayofweek_1', 'service_2', 'service_title_2', 'mfc_2',
'internal_status_2', 'external_status_2',
'department_id_2', 'custom_service_id_2', 'month_2', 'week_2', 'year_2',
'is_subdep_2', 'is_csid_2', 'dayofweek_2', 'requester_type', 'gender']
X[cat] = X[cat].astype('Int64')
X[cat] = X[cat].astype('object')
#X[cat] = X[cat].astype('Int64')
#X[cat] = X[cat].astype('object')
X[categorical] = X[categorical].fillna('NA')
def reduce_mem_usage(df):
"""
iterate through all the columns of a dataframe and
modify the data type to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print(('Memory usage of dataframe is {:.2f}'
'MB').format(start_mem))
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max <\
np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max <\
np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max <\
np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max <\
np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max <\
np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max <\
np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype('category')
end_mem = df.memory_usage().sum() / 1024**2
print(('Memory usage after optimization is: {:.2f}'
'MB').format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem)
/ start_mem))
return df
X = reduce_mem_usage(X)
X['person_1'] = X['person_1'].astype('int32')
X['sole_1'] = X['sole_1'].astype('int32')
X['legal_1'] = X['legal_1'].astype('int32')
X['auto_ping_queue_1'] = X['auto_ping_queue_1'].astype('int32')
lag_2 = ['service_2', 'service_title_2', 'mfc_2', 'internal_status_2',
'external_status_2', 'order_type_2', 'department_id_2',
'custom_service_id_2', 'service_level_2', 'is_subdep_2', 'is_csid_2',
'dayofweek_2', 'day_part_2', 'person_2', 'sole_2', 'month_2', 'week_2', 'year_2',
'legal_2', 'auto_ping_queue_2']
X[lag_2] = X[lag_2].astype('str')
from sklearn import preprocessing
labeling = []
for col in X[categorical].columns:
d = pd.DataFrame()
d[col] = X[col].unique()
le = preprocessing.LabelEncoder()
le.fit(d[col])
d[col+'_l'] = le.transform(d[col])
d = d.sort_values(by=[col+'_l']).reset_index(drop=True)
labeling.append(d)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X[['age']])
X[['age']] = scaler.transform(X[['age']])
d = pd.DataFrame()
d['service_title'] = y.unique()
le = preprocessing.LabelEncoder()
le.fit(d['service_title'])
d['service_title'+'_l'] = le.transform(d['service_title'])
d = d.sort_values(by=['service_title'+'_l']).reset_index(drop=True)
labeling_y = d
i = 0
for col in X[categorical].columns:
X[col] = X[col].map(labeling[i].set_index(col).to_dict()[col+'_l'])
i += 1
y = y.map(labeling_y.set_index('service_title').to_dict()['service_title'+'_l'])
#s_w = compute_class_weight(class_weight='balanced', classes=labeling_y['service_title_l'], y=y)
weights_l = labeling_y[:]
weights_l['weights'] = 1
weights = pd.DataFrame(y)
weights = pd.merge(weights, weights_l, how='left', left_on='service_title', right_on='service_title_l')
#weights = np.array(weights['weights'])
weights['service_title_y'].value_counts(normalize=True)[:30]
#weights['weights'].loc[weights['service_title_y'] == 4] = 0.4
#weights['weights'].loc[weights['service_title_y'] == 603] = 0.6
#weights['weights'].loc[weights['service_title_y'] == 98] = 0.78
#weights['weights'].loc[weights['service_title_y'] == 134] = 0.35
#weights['weights'].loc[weights['service_title_y'] == 1259] = 0.01
weights = np.array(weights['weights'])
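# Added sketch (not in the original notebook): the commented-out compute_class_weight call
# above suggests balanced class weights as an alternative to the uniform weights used here.
# One hypothetical way to build such a per-sample weight vector with scikit-learn:
# from sklearn.utils.class_weight import compute_class_weight
# class_ids = np.sort(y.unique())
# balanced_w = compute_class_weight(class_weight='balanced', classes=class_ids, y=y)
# weights = pd.Series(y).map(dict(zip(class_ids, balanced_w))).to_numpy()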
for col in X.columns:
print(col, len(X[col].unique()))
import tensorflow as tf
import tensorflow.keras.backend as K
from sklearn.model_selection import StratifiedKFold
# Detect hardware, return appropriate distribution strategy
try:
# TPU detection. No parameters necessary if TPU_NAME environment variable is
# set: this is always the case on Kaggle.
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.master())
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
# Default distribution strategy in Tensorflow. Works on CPU and single GPU.
strategy = tf.distribute.get_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
BATCH_SIZE = 32 * strategy.num_replicas_in_sync
LEARNING_RATE = 1e-3 * strategy.num_replicas_in_sync
EPOCHS = 5
len(X['service_1'].unique())
def generate_dataset(idxT, idxV):
trn_weights = weights[idxT, ]
val_weights = weights[idxV, ]
#n samples
trn_input_ids = np.array(X.index)[idxT,]
# Trainset
trn_input_service_1 = np.array(X['service_1'])[idxT,]
trn_input_service_title_1 = np.array(X['service_title_1'])[idxT,]
trn_input_mfc_1 = np.array(X['mfc_1'])[idxT,]
trn_input_internal_status_1 = np.array(X['internal_status_1'])[idxT,]
trn_input_external_status_1 = np.array(X['external_status_1'])[idxT,]
trn_input_order_type_1 = np.array(X['order_type_1'])[idxT,]
trn_input_department_id_1 = np.array(X['department_id_1'])[idxT,]
trn_input_custom_service_id_1 = np.array(X['custom_service_id_1'])[idxT,]
trn_input_service_level_1 = np.array(X['service_level_1'])[idxT,]
trn_input_is_subdep_1 = np.array(X['is_subdep_1'])[idxT,]
trn_input_is_csid_1 = np.array(X['is_csid_1'])[idxT,]
trn_input_dayofweek_1 = np.array(X['dayofweek_1'])[idxT,]
trn_input_day_part_1 = np.array(X['day_part_1'])[idxT,]
trn_input_month_1 = np.array(X['month_1'])[idxT,]
trn_input_week_1 = np.array(X['week_1'])[idxT,]
trn_input_year_1 = np.array(X['year_1'])[idxT,]
trn_input_person_1 = np.array(X['person_1'])[idxT,]
trn_input_sole_1 = np.array(X['sole_1'])[idxT,]
trn_input_legal_1 = np.array(X['legal_1'])[idxT,]
trn_input_auto_ping_queue_1 = np.array(X['auto_ping_queue_1'])[idxT,]
trn_input_service_2 = np.array(X['service_2'])[idxT,]
trn_input_service_title_2 = np.array(X['service_title_2'])[idxT,]
trn_input_mfc_2 = np.array(X['mfc_2'])[idxT,]
trn_input_internal_status_2 = np.array(X['internal_status_2'])[idxT,]
trn_input_external_status_2 = np.array(X['external_status_2'])[idxT,]
trn_input_order_type_2 = np.array(X['order_type_2'])[idxT,]
trn_input_department_id_2 = np.array(X['department_id_2'])[idxT,]
trn_input_custom_service_id_2 = np.array(X['custom_service_id_2'])[idxT,]
trn_input_service_level_2 = np.array(X['service_level_2'])[idxT,]
trn_input_is_subdep_2 = np.array(X['is_subdep_2'])[idxT,]
trn_input_is_csid_2 = np.array(X['is_csid_2'])[idxT,]
trn_input_dayofweek_2 = np.array(X['dayofweek_2'])[idxT,]
trn_input_day_part_2 = np.array(X['day_part_2'])[idxT,]
trn_input_month_2 = np.array(X['month_2'])[idxT,]
trn_input_week_2 = np.array(X['week_2'])[idxT,]
trn_input_year_2 = np.array(X['year_2'])[idxT,]
trn_input_person_2 = np.array(X['person_2'])[idxT,]
trn_input_sole_2 = np.array(X['sole_2'])[idxT,]
trn_input_legal_2 = np.array(X['legal_2'])[idxT,]
trn_input_auto_ping_queue_2 = np.array(X['auto_ping_queue_2'])[idxT,]
trn_input_requester_type = np.array(X['requester_type'])[idxT,]
trn_input_gender = np.array(X['gender'])[idxT,]
trn_input_age = np.array(X['age'])[idxT,]
trn_service_title = np.array(pd.get_dummies(y))[idxT,].astype('int32')
# Validation set
val_input_service_1 = np.array(X['service_1'])[idxV,]
val_input_service_title_1 = np.array(X['service_title_1'])[idxV,]
val_input_mfc_1 = np.array(X['mfc_1'])[idxV,]
val_input_internal_status_1 = np.array(X['internal_status_1'])[idxV,]
val_input_external_status_1 = np.array(X['external_status_1'])[idxV,]
val_input_order_type_1 = np.array(X['order_type_1'])[idxV,]
val_input_department_id_1 = np.array(X['department_id_1'])[idxV,]
val_input_custom_service_id_1 = np.array(X['custom_service_id_1'])[idxV,]
val_input_service_level_1 = np.array(X['service_level_1'])[idxV,]
val_input_is_subdep_1 = np.array(X['is_subdep_1'])[idxV,]
val_input_is_csid_1 = np.array(X['is_csid_1'])[idxV,]
val_input_dayofweek_1 = np.array(X['dayofweek_1'])[idxV,]
val_input_day_part_1 = np.array(X['day_part_1'])[idxV,]
val_input_month_1 = np.array(X['month_1'])[idxV,]
val_input_week_1 = np.array(X['week_1'])[idxV,]
val_input_year_1 = np.array(X['year_1'])[idxV,]
val_input_person_1 = np.array(X['person_1'])[idxV,]
val_input_sole_1 = np.array(X['sole_1'])[idxV,]
val_input_legal_1 = np.array(X['legal_1'])[idxV,]
val_input_auto_ping_queue_1 = np.array(X['auto_ping_queue_1'])[idxV,]
val_input_service_2 = np.array(X['service_2'])[idxV,]
val_input_service_title_2 = np.array(X['service_title_2'])[idxV,]
val_input_mfc_2 = np.array(X['mfc_2'])[idxV,]
val_input_internal_status_2 = np.array(X['internal_status_2'])[idxV,]
val_input_external_status_2 = np.array(X['external_status_2'])[idxV,]
val_input_order_type_2 = np.array(X['order_type_2'])[idxV,]
val_input_department_id_2 = np.array(X['department_id_2'])[idxV,]
val_input_custom_service_id_2 = np.array(X['custom_service_id_2'])[idxV,]
val_input_service_level_2 = np.array(X['service_level_2'])[idxV,]
val_input_is_subdep_2 = np.array(X['is_subdep_2'])[idxV,]
val_input_is_csid_2 = np.array(X['is_csid_2'])[idxV,]
val_input_dayofweek_2 = np.array(X['dayofweek_2'])[idxV,]
val_input_day_part_2 = np.array(X['day_part_2'])[idxV,]
val_input_month_2 = np.array(X['month_2'])[idxV,]
val_input_week_2 = np.array(X['week_2'])[idxV,]
val_input_year_2 = np.array(X['year_2'])[idxV,]
val_input_person_2 = np.array(X['person_2'])[idxV,]
val_input_sole_2 = np.array(X['sole_2'])[idxV,]
val_input_legal_2 = np.array(X['legal_2'])[idxV,]
val_input_auto_ping_queue_2 = np.array(X['auto_ping_queue_2'])[idxV,]
val_input_requester_type = np.array(X['requester_type'])[idxV,]
val_input_gender = np.array(X['gender'])[idxV,]
val_input_age = np.array(X['age'])[idxV,]
val_service_title = np.array(pd.get_dummies(y))[idxV,].astype('int32')
# Generating tf.data object
train_dataset = (
tf.data.Dataset
.from_tensor_slices(({'service_1':trn_input_service_1,
'service_title_1': trn_input_service_title_1,
'mfc_1': trn_input_mfc_1,
'internal_status_1': trn_input_internal_status_1,
'external_status_1': trn_input_external_status_1,
'order_type_1': trn_input_order_type_1,
'department_id_1': trn_input_department_id_1,
'custom_service_id_1': trn_input_custom_service_id_1,
'service_level_1': trn_input_service_level_1,
'is_subdep_1': trn_input_is_subdep_1,
'is_csid_1': trn_input_is_csid_1,
'dayofweek_1': trn_input_dayofweek_1,
'day_part_1': trn_input_day_part_1,
'month_1': trn_input_month_1,
'week_1': trn_input_week_1,
'year_1': trn_input_year_1,
'person_1': trn_input_person_1,
'sole_1': trn_input_sole_1,
'legal_1': trn_input_legal_1,
'auto_ping_queue_1': trn_input_auto_ping_queue_1,
'service_2':trn_input_service_2,
'service_title_2': trn_input_service_title_2,
'mfc_2': trn_input_mfc_2,
'internal_status_2': trn_input_internal_status_2,
'external_status_2': trn_input_external_status_2,
'order_type_2': trn_input_order_type_2,
'department_id_2': trn_input_department_id_2,
'custom_service_id_2': trn_input_custom_service_id_2,
'service_level_2': trn_input_service_level_2,
'is_subdep_2': trn_input_is_subdep_2,
'is_csid_2': trn_input_is_csid_2,
'dayofweek_2': trn_input_dayofweek_2,
'day_part_2': trn_input_day_part_2,
'month_2': trn_input_month_2,
'week_2': trn_input_week_2,
'year_2': trn_input_year_2,
'person_2': trn_input_person_2,
'sole_2': trn_input_sole_2,
'legal_2': trn_input_legal_2,
'auto_ping_queue_2': trn_input_auto_ping_queue_2,
'requester_type': trn_input_requester_type,
'gender': trn_input_gender,
'age': trn_input_age},
{'service_title': trn_service_title}, trn_weights))
.shuffle(2048)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
valid_dataset = (
tf.data.Dataset
.from_tensor_slices(({'service_1':val_input_service_1,
'service_title_1': val_input_service_title_1,
'mfc_1': val_input_mfc_1,
'internal_status_1': val_input_internal_status_1,
'external_status_1': val_input_external_status_1,
'order_type_1': val_input_order_type_1,
'department_id_1': val_input_department_id_1,
'custom_service_id_1': val_input_custom_service_id_1,
'service_level_1': val_input_service_level_1,
'is_subdep_1': val_input_is_subdep_1,
'is_csid_1': val_input_is_csid_1,
'dayofweek_1': val_input_dayofweek_1,
'day_part_1': val_input_day_part_1,
'month_1': val_input_month_1,
'week_1': val_input_week_1,
'year_1': val_input_year_1,
'person_1': val_input_person_1,
'sole_1': val_input_sole_1,
'legal_1': val_input_legal_1,
'auto_ping_queue_1': val_input_auto_ping_queue_1,
'service_2':val_input_service_2,
'service_title_2': val_input_service_title_2,
'mfc_2': val_input_mfc_2,
'internal_status_2': val_input_internal_status_2,
'external_status_2': val_input_external_status_2,
'order_type_2': val_input_order_type_2,
'department_id_2': val_input_department_id_2,
'custom_service_id_2': val_input_custom_service_id_2,
'service_level_2': val_input_service_level_2,
'is_subdep_2': val_input_is_subdep_2,
'is_csid_2': val_input_is_csid_2,
'dayofweek_2': val_input_dayofweek_2,
'day_part_2': val_input_day_part_2,
'month_2': val_input_month_2,
'week_2': val_input_week_2,
'year_2': val_input_year_2,
'person_2': val_input_person_2,
'sole_2': val_input_sole_2,
'legal_2': val_input_legal_2,
'auto_ping_queue_2': val_input_auto_ping_queue_2,
'requester_type': val_input_requester_type,
'gender': val_input_gender,
'age': val_input_age},
{'service_title': val_service_title}, val_weights))
.batch(BATCH_SIZE)
.cache()
.prefetch(AUTO)
)
return trn_input_ids.shape[0]//BATCH_SIZE, train_dataset, valid_dataset
def scheduler(epoch):
return LEARNING_RATE * 0.2**epoch
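# Added note (not in the original notebook): this schedule decays the learning rate
# geometrically, LEARNING_RATE * 0.2**epoch, i.e. a 5x reduction after every epoch.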
def build_model():
service_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_1')
service_title_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_title_1')
mfc_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='mfc_1')
internal_status_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='internal_status_1')
external_status_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='external_status_1')
order_type_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='order_type_1')
department_id_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='department_id_1')
custom_service_id_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='custom_service_id_1')
service_level_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_level_1')
is_subdep_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='is_subdep_1')
is_csid_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='is_csid_1')
dayofweek_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='dayofweek_1')
day_part_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='day_part_1')
month_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='month_1')
week_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='week_1')
year_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='year_1')
person_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='person_1')
sole_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='sole_1')
legal_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='legal_1')
auto_ping_queue_1 = tf.keras.layers.Input((1,), dtype=tf.int32, name='auto_ping_queue_1')
service_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_2')
service_title_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_title_2')
mfc_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='mfc_2')
internal_status_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='internal_status_2')
external_status_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='external_status_2')
order_type_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='order_type_2')
department_id_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='department_id_2')
custom_service_id_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='custom_service_id_2')
service_level_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='service_level_2')
is_subdep_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='is_subdep_2')
is_csid_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='is_csid_2')
dayofweek_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='dayofweek_2')
day_part_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='day_part_2')
month_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='month_2')
week_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='week_2')
year_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='year_2')
person_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='person_2')
sole_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='sole_2')
legal_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='legal_2')
auto_ping_queue_2 = tf.keras.layers.Input((1,), dtype=tf.int32, name='auto_ping_queue_2')
requester_type = tf.keras.layers.Input((1,), dtype=tf.int32, name='requester_type')
gender = tf.keras.layers.Input((1,), dtype=tf.int32, name='gender')
age = tf.keras.layers.Input((1,), dtype=tf.float64, name='age')
service_1_embedding = tf.keras.layers.Embedding(len(X['service_1'].unique()), 11, input_length=1, name='service_1_embedding')(service_1)
service_title_1_embedding = tf.keras.layers.Embedding(len(X['service_title_1'].unique()), 50, input_length=1, name='service_title_1_embedding')(service_title_1)
mfc_1_embedding = tf.keras.layers.Embedding(len(X['mfc_1'].unique()), 50, input_length=1, name='mfc_1_embedding')(mfc_1)
internal_status_1_embedding = tf.keras.layers.Embedding(len(X['internal_status_1'].unique()), 5, input_length=1, name='internal_status_1_embedding')(internal_status_1)
external_status_1_embedding = tf.keras.layers.Embedding(len(X['external_status_1'].unique()), 14, input_length=1, name='external_status_1_embedding')(external_status_1)
order_type_1_embedding = tf.keras.layers.Embedding(len(X['order_type_1'].unique()), 2, input_length=1, name='order_type_1_embedding')(order_type_1)
department_id_1_embedding = tf.keras.layers.Embedding(len(X['department_id_1'].unique()), 46, input_length=1, name='department_id_1_embedding')(department_id_1)
custom_service_id_1_embedding = tf.keras.layers.Embedding(len(X['custom_service_id_1'].unique()), 26, input_length=1, name='custom_service_id_1_embedding')(custom_service_id_1)
service_level_1_embedding = tf.keras.layers.Embedding(len(X['service_level_1'].unique()), 3, input_length=1, name='service_level_1_embedding')(service_level_1)
is_subdep_1_embedding = tf.keras.layers.Embedding(len(X['is_subdep_1'].unique()), 1, input_length=1, name='is_subdep_1_embedding')(is_subdep_1)
is_csid_1_embedding = tf.keras.layers.Embedding(len(X['is_csid_1'].unique()), 1, input_length=1, name='is_csid_1_embedding')(is_csid_1)
dayofweek_1_embedding = tf.keras.layers.Embedding(len(X['dayofweek_1'].unique()), 4, input_length=1, name='dayofweek_1_embedding')(dayofweek_1)
day_part_1_embedding = tf.keras.layers.Embedding(len(X['day_part_1'].unique()), 2, input_length=1, name='day_part_1_embedding')(day_part_1)
month_1_embedding = tf.keras.layers.Embedding(len(X['month_1'].unique()), 6, input_length=1, name='month_1_embedding')(month_1)
week_1_embedding = tf.keras.layers.Embedding(len(X['week_1'].unique()), 26, input_length=1, name='week_1_embedding')(week_1)
year_1_embedding = tf.keras.layers.Embedding(len(X['year_1'].unique()), 1, input_length=1, name='year_1_embedding')(year_1)
person_1_embedding = tf.keras.layers.Embedding(len(X['person_1'].unique()), 1, input_length=1, name='person_1_embedding')(person_1)
sole_1_embedding = tf.keras.layers.Embedding(len(X['sole_1'].unique()), 1, input_length=1, name='sole_1_embedding')(sole_1)
legal_1_embedding = tf.keras.layers.Embedding(len(X['legal_1'].unique()), 1, input_length=1, name='legal_1_embedding')(legal_1)
auto_ping_queue_1_embedding = tf.keras.layers.Embedding(len(X['auto_ping_queue_1'].unique()), 1, input_length=1, name='auto_ping_queue_1_embedding')(auto_ping_queue_1)
service_2_embedding = tf.keras.layers.Embedding(len(X['service_2'].unique()), 11, input_length=1, name='service_2_embedding')(service_2)
service_title_2_embedding = tf.keras.layers.Embedding(len(X['service_title_2'].unique()), 50, input_length=1, name='service_title_2_embedding')(service_title_2)
mfc_2_embedding = tf.keras.layers.Embedding(len(X['mfc_2'].unique()), 50, input_length=1, name='mfc_2_embedding')(mfc_2)
internal_status_2_embedding = tf.keras.layers.Embedding(len(X['internal_status_2'].unique()), 5, input_length=1, name='internal_status_2_embedding')(internal_status_2)
external_status_2_embedding = tf.keras.layers.Embedding(len(X['external_status_2'].unique()), 14, input_length=1, name='external_status_2_embedding')(external_status_2)
order_type_2_embedding = tf.keras.layers.Embedding(len(X['order_type_2'].unique()), 2, input_length=1, name='order_type_2_embedding')(order_type_2)
department_id_2_embedding = tf.keras.layers.Embedding(len(X['department_id_2'].unique()), 46, input_length=1, name='department_id_2_embedding')(department_id_2)
custom_service_id_2_embedding = tf.keras.layers.Embedding(len(X['custom_service_id_2'].unique()), 26, input_length=1, name='custom_service_id_2_embedding')(custom_service_id_2)
service_level_2_embedding = tf.keras.layers.Embedding(len(X['service_level_2'].unique()), 3, input_length=1, name='service_level_2_embedding')(service_level_2)
is_subdep_2_embedding = tf.keras.layers.Embedding(len(X['is_subdep_2'].unique()), 1, input_length=1, name='is_subdep_2_embedding')(is_subdep_2)
is_csid_2_embedding = tf.keras.layers.Embedding(len(X['is_csid_2'].unique()), 1, input_length=1, name='is_csid_2_embedding')(is_csid_2)
dayofweek_2_embedding = tf.keras.layers.Embedding(len(X['dayofweek_2'].unique()), 4, input_length=1, name='dayofweek_2_embedding')(dayofweek_2)
day_part_2_embedding = tf.keras.layers.Embedding(len(X['day_part_2'].unique()), 2, input_length=1, name='day_part_2_embedding')(day_part_2)
month_2_embedding = tf.keras.layers.Embedding(len(X['month_2'].unique()), 6, input_length=1, name='month_2_embedding')(month_2)
week_2_embedding = tf.keras.layers.Embedding(len(X['week_2'].unique()), 26, input_length=1, name='week_2_embedding')(week_2)
year_2_embedding = tf.keras.layers.Embedding(len(X['year_2'].unique()), 1, input_length=1, name='year_2_embedding')(year_2)
person_2_embedding = tf.keras.layers.Embedding(len(X['person_2'].unique()), 1, input_length=1, name='person_2_embedding')(person_2)
sole_2_embedding = tf.keras.layers.Embedding(len(X['sole_2'].unique()), 1, input_length=1, name='sole_2_embedding')(sole_2)
legal_2_embedding = tf.keras.layers.Embedding(len(X['legal_2'].unique()), 1, input_length=1, name='legal_2_embedding')(legal_2)
auto_ping_queue_2_embedding = tf.keras.layers.Embedding(len(X['auto_ping_queue_2'].unique()), 1, input_length=1, name='auto_ping_queue_2_embedding')(auto_ping_queue_2)
requester_type_embedding = tf.keras.layers.Embedding(len(X['requester_type'].unique()), 2, input_length=1, name='requester_type_embedding')(requester_type)
gender_embedding = tf.keras.layers.Embedding(len(X['gender'].unique()), 1, input_length=1, name='gender_embedding')(gender)
age_reshape = tf.keras.layers.Reshape((1, 1), name='age_reshape')(age)
concatenated = tf.keras.layers.Concatenate()([service_1_embedding,
service_title_1_embedding,
mfc_1_embedding,
internal_status_1_embedding,
external_status_1_embedding,
order_type_1_embedding,
department_id_1_embedding,
custom_service_id_1_embedding,
service_level_1_embedding,
is_subdep_1_embedding,
is_csid_1_embedding,
dayofweek_1_embedding,
day_part_1_embedding,
month_1_embedding,
year_1_embedding,
person_1_embedding,
sole_1_embedding,
legal_1_embedding,
auto_ping_queue_1_embedding,
service_2_embedding,
service_title_2_embedding,
mfc_2_embedding,
internal_status_2_embedding,
external_status_2_embedding,
order_type_2_embedding,
department_id_2_embedding,
custom_service_id_2_embedding,
service_level_2_embedding,
is_subdep_2_embedding,
is_csid_2_embedding,
dayofweek_2_embedding,
day_part_2_embedding,
month_2_embedding,
year_2_embedding,
person_2_embedding,
sole_2_embedding,
legal_2_embedding,
auto_ping_queue_2_embedding,
requester_type_embedding,
gender_embedding,
age_reshape])
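    # Added note: week_1_embedding and week_2_embedding are created above but are not part of
    # this concatenation, so the week_1/week_2 inputs do not contribute to the model output.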
#out = tf.keras.layers.Flatten()(concatenated)
#out = tf.keras.layers.Dense(512, activation='relu')(out)
#out = tf.keras.layers.Dense(256, activation='relu')(out)
#out = tf.keras.layers.Dense(256, activation='relu')(out)
#out = tf.keras.layers.Conv1D(128, 2, padding='same')(concatenated)
#out = tf.keras.layers.LeakyReLU()(out)
#out = tf.keras.layers.Conv1D(64, 2, padding='same')(out)
#out = tf.keras.layers.Flatten()(out)
conv_0 = tf.keras.layers.Conv1D(256, 3, padding='same', activation='relu')(concatenated)
conv_1 = tf.keras.layers.Conv1D(256, 2, padding='same', activation='relu')(concatenated)
conv_2 = tf.keras.layers.Conv1D(256, 6, padding='same', activation='relu')(concatenated)
conv_0 = tf.keras.layers.Conv1D(128, 3, padding='same', activation='relu')(conv_0)
conv_1 = tf.keras.layers.Conv1D(128, 2, padding='same', activation='relu')(conv_1)
conv_2 = tf.keras.layers.Conv1D(128, 6, padding='same', activation='relu')(conv_2)
concatenated_tensor = tf.keras.layers.Concatenate(axis=1)([conv_0, conv_1, conv_2])
out = tf.keras.layers.Flatten()(concatenated_tensor)
out = tf.keras.layers.Dropout(0.1)(out)
out = tf.keras.layers.Dense(len(y.unique()), activation='softmax', name='service_title')(out)
model = tf.keras.models.Model(inputs=[service_1,
service_title_1,
mfc_1,
internal_status_1,
external_status_1,
order_type_1,
department_id_1,
custom_service_id_1,
service_level_1,
is_subdep_1,
is_csid_1,
dayofweek_1,
day_part_1,
month_1,
week_1,
year_1,
person_1,
sole_1,
legal_1,
auto_ping_queue_1,
service_2,
service_title_2,
mfc_2,
internal_status_2,
external_status_2,
order_type_2,
department_id_2,
custom_service_id_2,
service_level_2,
is_subdep_2,
is_csid_2,
dayofweek_2,
day_part_2,
month_2,
week_2,
year_2,
person_2,
sole_2,
legal_2,
auto_ping_queue_2,
requester_type,
gender,
age],
outputs=out)
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(loss = 'categorical_crossentropy', optimizer=optimizer, metrics=["accuracy"])
return model
n_splits = 5
VER='v11'
DISPLAY=1 # USE display=1 FOR INTERACTIVE
service_title = np.zeros((X.shape[0], 1))
skf = StratifiedKFold(n_splits=n_splits,shuffle=True,random_state=777)
for fold, (idxT, idxV) in enumerate(skf.split(X,y)):
print('#'*25)
print('### FOLD %i'%(fold+1))
print('#'*25)
# Cleaning everything
K.clear_session()
    if tpu:  # re-initializing the TPU between folds only applies when a TPU is present
        tf.tpu.experimental.initialize_tpu_system(tpu)
# Building model
with strategy.scope():
model = build_model()
n_steps, trn_dataset, val_dataset = generate_dataset(idxT, idxV)
reduce_lr = tf.keras.callbacks.LearningRateScheduler(scheduler)
sv = tf.keras.callbacks.ModelCheckpoint(
'%s-learnedemb-%i.h5'%(VER,fold), monitor='val_accuracy', verbose=1, save_best_only=True,
save_weights_only=True, mode='auto', save_freq='epoch')
hist = model.fit(trn_dataset,
epochs=EPOCHS,
verbose=DISPLAY,
callbacks=[sv, reduce_lr],
validation_data=val_dataset)
###Output
#########################
### FOLD 1
#########################
Epoch 1/5
3403/3403 [==============================] - ETA: 0s - loss: 2.2985 - accuracy: 0.4519
Epoch 00001: val_accuracy improved from -inf to 0.46283, saving model to v11-learnedemb-0.h5
3403/3403 [==============================] - 239s 70ms/step - loss: 2.2985 - accuracy: 0.4519 - val_loss: 2.2256 - val_accuracy: 0.4628 - lr: 0.0080
Epoch 2/5
3402/3403 [============================>.] - ETA: 0s - loss: 2.0750 - accuracy: 0.4844
Epoch 00003: val_accuracy improved from 0.47872 to 0.48346, saving model to v11-learnedemb-1.h5
3403/3403 [==============================] - 232s 68ms/step - loss: 2.0750 - accuracy: 0.4844 - val_loss: 2.0926 - val_accuracy: 0.4835 - lr: 3.2000e-04
Epoch 4/5
3402/3403 [============================>.] - ETA: 0s - loss: 2.0535 - accuracy: 0.4878
Epoch 00005: val_accuracy improved from 0.48376 to 0.48393, saving model to v11-learnedemb-1.h5
3403/3403 [==============================] - 232s 68ms/step - loss: 2.0535 - accuracy: 0.4878 - val_loss: 2.0882 - val_accuracy: 0.4839 - lr: 1.2800e-05
#########################
### FOLD 3
#########################
Epoch 1/5
3403/3403 [==============================] - ETA: 0s - loss: 2.2973 - accuracy: 0.4526
Epoch 00001: val_accuracy improved from -inf to 0.45908, saving model to v11-learnedemb-2.h5
3403/3403 [==============================] - 242s 71ms/step - loss: 2.2973 - accuracy: 0.4526 - val_loss: 2.2283 - val_accuracy: 0.4591 - lr: 0.0080
Epoch 2/5
3403/3403 [==============================] - ETA: 0s - loss: 2.2952 - accuracy: 0.4527
Epoch 00001: val_accuracy improved from -inf to 0.46206, saving model to v11-learnedemb-3.h5
3403/3403 [==============================] - 244s 72ms/step - loss: 2.2952 - accuracy: 0.4527 - val_loss: 2.2186 - val_accuracy: 0.4621 - lr: 0.0080
Epoch 2/5
315/3403 [=>............................] - ETA: 2:49 - loss: 2.2006 - accuracy: 0.4658
Epoch 00003: val_accuracy improved from 0.47851 to 0.48187, saving model to v11-learnedemb-3.h5
3403/3403 [==============================] - 235s 69ms/step - loss: 2.0755 - accuracy: 0.4843 - val_loss: 2.0886 - val_accuracy: 0.4819 - lr: 3.2000e-04
Epoch 4/5
498/3403 [===>..........................] - ETA: 2:40 - loss: 2.0858 - accuracy: 0.4837
Epoch 00005: val_accuracy did not improve from 0.48271
3403/3403 [==============================] - 235s 69ms/step - loss: 2.0548 - accuracy: 0.4873 - val_loss: 2.0848 - val_accuracy: 0.4826 - lr: 1.2800e-05
#########################
### FOLD 5
#########################
Epoch 1/5
2497/3403 [=====================>........] - ETA: 49s - loss: 2.3075 - accuracy: 0.4510
|
ch00python/015variables.ipynb
|
###Markdown
Variables Variable Assignment When we generate a result, the answer is displayed, but not kept anywhere.
###Code
2*3
###Output
_____no_output_____
###Markdown
If we want to get back to that result, we have to store it. We put it in a box, with a name on the box. This is a **variable**.
###Code
six = 2*3
print(six)
###Output
6
###Markdown
If we look for a variable that hasn't ever been defined, we get an error.
###Code
print(seven)
###Output
_____no_output_____
###Markdown
That's **not** the same as an empty box, well labeled:
###Code
nothing = None
print(nothing)
type(None)
###Output
_____no_output_____
###Markdown
(None is the special python value for a no-value variable.) *Supplementary Materials*: There's more on variables at http://swcarpentry.github.io/python-novice-inflammation/01-numpy/index.html Anywhere we could put a raw number, we can put a variable label, and that works fine:
###Code
print(5*six)
scary = six*six*six
print(scary)
###Output
216
###Markdown
Reassignment and multiple labels But here's the real scary thing: it seems like we can put something else in that box:
###Code
scary = 25
print(scary)
###Output
25
###Markdown
Note that **the data that was there before has been lost**. No labels refer to it any more - so it has been "Garbage Collected"! We might imagine something pulled out of the box, and thrown on the floor, to make way for the next occupant. In fact, though, it is the **label** that has moved. We can see this because we have more than one label referring to the same box:
###Code
name = "James"
nom = name
print(nom)
print(name)
###Output
James
###Markdown
And we can move just one of those labels:
###Code
nom = "Hetherington"
print(name)
print(nom)
###Output
Hetherington
###Markdown
So we can now develop a better understanding of our labels and boxes: each box is a piece of space (an *address*) in computer memory. Each label (variable) is a reference to such a place. When the number of labels on a box ("variables referencing an address") gets down to zero, then the data in the box cannot be found any more. After a while, the language's "Garbage collector" will wander by, notice a box with no labels, and throw the data away, **making that box available for more data**. Old fashioned languages like C and Fortran don't have Garbage collectors. So a memory address with no references to it still takes up memory, and the computer can more easily run out. So when I write:
###Code
name = "Jim"
###Output
_____no_output_____
###Markdown
The following things happen: 1. A new text **object** is created, and an address in memory is found for it.1. The variable "name" is moved to refer to that address.1. The old address, containing "James", now has no labels.1. The garbage collector frees the memory at the old address. **Supplementary materials**: There's an online python tutor which is great for visualising memory and references. Try the [scenario we just looked at](http://www.pythontutor.com/visualize.htmlcode=name+%3D+%22James%22%0Anom+%3D+name%0Aprint+nom%0Aprint+name%0Anom+%3D+%22Hetherington%22%0Aprint+nom%0Aprint+name%0Aname%3D+%22Jim%22%0Aprint+nom%0Aprint+name&mode=display&origin=opt-frontend.js&cumulative=false&heapPrimitives=true&textReferences=false&py=2&rawInputLstJSON=%5B%5D&curInstr=0)Labels are contained in groups called "frames": our frame contains two labels, 'nom' and 'name'. Objects and types An object, like `name`, has a type. In the online python tutor example, we see that the objects have type "str".`str` means a text object: Programmers call these 'strings'.
###Code
type(name)
###Output
_____no_output_____
###Markdown
Depending on its type, an object can have different *properties*: data fields inside the object. Consider a Python complex number for example:
###Code
z = 3+1j
###Output
_____no_output_____
###Markdown
We can see what properties and methods an object has available using the `dir` function:
###Code
dir(z)
###Output
_____no_output_____
###Markdown
You can see that there are several methods whose name starts and ends with `__` (e.g. `__init__`): these are special methods that Python uses internally, and we will discuss some of them later on in this course. The others (in this case, `conjugate`, `imag` and `real`) are the methods and fields through which we can interact with this object.
###Code
type(z)
z.real
z.imag
###Output
_____no_output_____
###Markdown
A property of an object is accessed with a dot. The jargon is that the "dot operator" is used to obtain a property of an object. When we try to access a property that doesn't exist, we get an error:
###Code
z.wrong
###Output
_____no_output_____
###Markdown
Reading error messages. It's important, when learning to program, to develop an ability to read an error message and find, from in amongst all the confusing noise, the bit of the error message which tells you what to change! We don't yet know what is meant by `AttributeError`, or "Traceback".
###Code
z2 = 5-6j
print("Gets to here")
print(z.wrong)
print("Didn't get to here")
###Output
Gets to here
###Markdown
But in the above, we can see that the error happens on the **third** line of our code cell. We can also see that the error message: > 'complex' object has no attribute 'wrong' ...tells us something important. Even if we don't understand the rest, this is useful for debugging! Variables and the notebook kernel When I type code in the notebook, the objects live in memory between cells.
###Code
number = 0
print(number)
###Output
0
###Markdown
If I change a variable:
###Code
number = number + 1
print(number)
###Output
1
###Markdown
Variables Variable Assignment When we generate a result, the answer is displayed, but not kept anywhere.
###Code
2*3
###Output
_____no_output_____
###Markdown
If we want to get back to that result, we have to store it. We put it in a box, with a name on the box. This is a **variable**.
###Code
six = 2*3
print(six)
###Output
6
###Markdown
If we look for a variable that hasn't ever been defined, we get an error.
###Code
print(seven)
###Output
_____no_output_____
###Markdown
That's **not** the same as an empty box, well labeled:
###Code
nothing = None
print(nothing)
type(None)
###Output
_____no_output_____
###Markdown
(None is the special python value for a no-value variable.) *Supplementary Materials*: There's more on variables at http://swcarpentry.github.io/python-novice-inflammation/01-numpy.html Anywhere we could put a raw number, we can put a variable label, and that works fine:
###Code
print(5*six)
scary = six*six*six
print(scary)
###Output
216
###Markdown
Reassignment and multiple labels But here's the real scary thing: it seems like we can put something else in that box:
###Code
scary = 25
print(scary)
###Output
25
###Markdown
Note that **the data that was there before has been lost**. No labels refer to it any more - so it has been "Garbage Collected"! We might imagine something pulled out of the box, and thrown on the floor, to make way for the next occupant. In fact, though, it is the **label** that has moved. We can see this because we have more than one label referring to the same box:
###Code
name = "James"
nom = name
print(nom)
print(name)
###Output
James
###Markdown
And we can move just one of those labels:
###Code
nom = "Hetherington"
print(name)
print(nom)
###Output
Hetherington
###Markdown
So we can now develop a better understanding of our labels and boxes: each box is a piece of space (an *address*) in computer memory. Each label (variable) is a reference to such a place. When the number of labels on a box ("variables referencing an address") gets down to zero, then the data in the box cannot be found any more. After a while, the language's "Garbage collector" will wander by, notice a box with no labels, and throw the data away, **making that box available for more data**. Old fashioned languages like C and Fortran don't have Garbage collectors. So a memory address with no references to it still takes up memory, and the computer can more easily run out. So when I write:
###Code
name = "Jim"
###Output
_____no_output_____
###Markdown
The following things happen: 1. A new text **object** is created, and an address in memory is found for it.1. The variable "name" is moved to refer to that address.1. The old address, containing "James", now has no labels.1. The garbage collector frees the memory at the old address. **Supplementary materials**: There's an online python tutor which is great for visualising memory and references. Try the [scenario we just looked at](http://www.pythontutor.com/visualize.htmlcode=name+%3D+%22James%22%0Anom+%3D+name%0Aprint+nom%0Aprint+name%0Anom+%3D+%22Hetherington%22%0Aprint+nom%0Aprint+name%0Aname%3D+%22Jim%22%0Aprint+nom%0Aprint+name&mode=display&origin=opt-frontend.js&cumulative=false&heapPrimitives=true&textReferences=false&py=2&rawInputLstJSON=%5B%5D&curInstr=0)Labels are contained in groups called "frames": our frame contains two labels, 'nom' and 'name'. Objects and types An object, like `name`, has a type. In the online python tutor example, we see that the objects have type "str".`str` means a text object: Programmers call these 'strings'.
###Code
type(name)
###Output
_____no_output_____
###Markdown
Depending on its type, an object can have different *properties*: data fields inside the object. Consider a Python complex number for example:
###Code
z=3+1j
dir(z)
type(z)
z.real
z.imag
###Output
_____no_output_____
###Markdown
A property of an object is accessed with a dot. The jargon is that the "dot operator" is used to obtain a property of an object. When we try to access a property that doesn't exist, we get an error:
###Code
z.wrong
###Output
_____no_output_____
###Markdown
Reading error messages. It's important, when learning to program, to develop an ability to read an error message and find, from in amongst all the confusing noise, the bit of the error message which tells you what to change! We don't yet know what is meant by `AttributeError`, or "Traceback".
###Code
z2=5-6j
print("Gets to here")
print(z.wrong)
print("Didn't get to here")
###Output
Gets to here
###Markdown
But in the above, we can see that the error happens on the **third** line of our code cell. We can also see that the error message: > 'complex' object has no attribute 'wrong' ...tells us something important. Even if we don't understand the rest, this is useful for debugging! Variables and the notebook kernel When I type code in the notebook, the objects live in memory between cells.
###Code
number = 0
print(number)
###Output
0
###Markdown
If I change a variable:
###Code
number = number +1
print(number)
###Output
1
###Markdown
Variables Variable Assignment When we generate a result, the answer is displayed, but not kept anywhere.
###Code
2 * 3
###Output
_____no_output_____
###Markdown
If we want to get back to that result, we have to store it. We put it in a box, with a name on the box. This is a **variable**.
###Code
six = 2 * 3
print(six)
###Output
6
###Markdown
If we look for a variable that hasn't ever been defined, we get an error.
###Code
print(seven)
###Output
_____no_output_____
###Markdown
That's **not** the same as an empty box, well labeled:
###Code
nothing = None
print(nothing)
type(None)
###Output
_____no_output_____
###Markdown
(None is the special python value for a no-value variable.) *Supplementary Materials*: There's more on variables at [Software Carpentry's Python lesson](http://swcarpentry.github.io/python-novice-inflammation/01-numpy/index.html). Anywhere we could put a raw number, we can put a variable label, and that works fine:
###Code
print(5 * six)
scary = six * six * six
print(scary)
###Output
216
###Markdown
Reassignment and multiple labels But here's the real scary thing: it seems like we can put something else in that box:
###Code
scary = 25
print(scary)
###Output
25
###Markdown
Note that **the data that was there before has been lost**. No labels refer to it any more - so it has been "Garbage Collected"! We might imagine something pulled out of the box, and thrown on the floor, to make way for the next occupant. In fact, though, it is the **label** that has moved. We can see this because we have more than one label referring to the same box:
###Code
name = "Eric"
nom = name
print(nom)
print(name)
###Output
Eric
###Markdown
And we can move just one of those labels:
###Code
nom = "Idle"
print(name)
print(nom)
###Output
Idle
###Markdown
So we can now develop a better understanding of our labels and boxes: each box is a piece of space (an *address*) in computer memory. Each label (variable) is a reference to such a place. When the number of labels on a box ("variables referencing an address") gets down to zero, then the data in the box cannot be found any more. After a while, the language's "Garbage collector" will wander by, notice a box with no labels, and throw the data away, **making that box available for more data**. Old fashioned languages like C and Fortran don't have Garbage collectors. So a memory address with no references to it still takes up memory, and the computer can more easily run out. So when I write:
###Code
name = "Michael"
###Output
_____no_output_____
###Markdown
The following things happen: 1. A new text **object** is created, and an address in memory is found for it.1. The variable "name" is moved to refer to that address.1. The old address, containing "James", now has no labels.1. The garbage collector frees the memory at the old address. **Supplementary materials**: There's an online python tutor which is great for visualising memory and references. Try the [scenario we just looked at](http://www.pythontutor.com/visualize.htmlcode=name%20%3D%20%22Eric%22%0Anom%20%3D%20name%0Aprint%28nom%29%0Aprint%28name%29%0Anom%20%3D%20%22Idle%22%0Aprint%28name%29%0Aprint%28nom%29%0Aname%20%3D%20%22Michael%22%0Aprint%28name%29%0Aprint%28nom%29%0A&cumulative=false&curInstr=0&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false).Labels are contained in groups called "frames": our frame contains two labels, 'nom' and 'name'. Objects and types An object, like `name`, has a type. In the online python tutor example, we see that the objects have type "str".`str` means a text object: Programmers call these 'strings'.
###Code
type(name)
###Output
_____no_output_____
###Markdown
Depending on its type, an object can have different *properties*: data fields inside the object. Consider a Python complex number for example:
###Code
z = 3 + 1j
###Output
_____no_output_____
###Markdown
We can see what properties and methods an object has available using the `dir` function:
###Code
dir(z)
###Output
_____no_output_____
###Markdown
You can see that there are several methods whose name starts and ends with `__` (e.g. `__init__`): these are special methods that Python uses internally, and we will discuss some of them later on in this course. The others (in this case, `conjugate`, `imag` and `real`) are the methods and fields through which we can interact with this object.
###Code
type(z)
z.real
z.imag
###Output
_____no_output_____
###Markdown
A property of an object is accessed with a dot. The jargon is that the "dot operator" is used to obtain a property of an object. When we try to access a property that doesn't exist, we get an error:
###Code
z.wrong
###Output
_____no_output_____
###Markdown
Reading error messages. It's important, when learning to program, to develop an ability to read an error message and find, from in amongst all the confusing noise, the bit of the error message which tells you what to change! We don't yet know what is meant by `AttributeError`, or "Traceback".
###Code
z2 = 5 - 6j
print("Gets to here")
print(z.wrong)
print("Didn't get to here")
###Output
Gets to here
###Markdown
But in the above, we can see that the error happens on the **third** line of our code cell. We can also see that the error message: > 'complex' object has no attribute 'wrong' ...tells us something important. Even if we don't understand the rest, this is useful for debugging! Variables and the notebook kernel When I type code in the notebook, the objects live in memory between cells.
###Code
number = 0
print(number)
###Output
0
###Markdown
If I change a variable:
###Code
number = number + 1
print(number)
###Output
1
|
Workshops/NNFL workshop 2/.ipynb_checkpoints/2. Starting with NNs in PyTorch (MNIST)-checkpoint.ipynb
|
###Markdown
Neural networks with PyTorch Deep learning networks tend to be massive, with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch provides a module, `nn`, that lets you build large neural networks efficiently.
###Code
# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
import helper
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Now we're going to build a larger network that can solve a (formerly) difficult problem: identifying text in an image. Here we'll use the MNIST dataset, which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below. Our goal is to build a neural network that can take one of these images and predict the digit in the image. First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.
###Code
### Run this cell
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
###Output
Using downloaded and verified file: /home/deepak/.pytorch/MNIST_data/MNIST/raw/train-images-idx3-ubyte.gz
Extracting /home/deepak/.pytorch/MNIST_data/MNIST/raw/train-images-idx3-ubyte.gz to /home/deepak/.pytorch/MNIST_data/MNIST/raw
###Markdown
We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`. Later, we'll use this to loop through the dataset for training, like ```python for image, label in trainloader: do things with images and labels ``` You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.
###Code
dataiter = iter(trainloader)
images, labels = dataiter.next()
print(type(images))
print(images.shape)
print(labels.shape)
###Output
<class 'torch.Tensor'>
torch.Size([64, 1, 28, 28])
torch.Size([64])
###Markdown
This is what one of the images looks like.
###Code
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
###Output
_____no_output_____
###Markdown
First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to a have a shape of `(64, 784)`, 784 is 28 times 28. This is typically called *flattening*, we flattened the 2D images into 1D vectors.Previously you built a network with one output unit. Here we need 10 output units, one for each digit. We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.
###Code
## Solution
def activation(x):
return 1/(1+torch.exp(-x))
# Flatten the input images
inputs = images.view(images.shape[0], -1)
# Create parameters
w1 = torch.randn(784, 256)
b1 = torch.randn(256)
w2 = torch.randn(256, 10)
b2 = torch.randn(10)
h = activation(torch.mm(inputs, w1) + b1)
out = torch.mm(h, w2) + b2
###Output
_____no_output_____
###Markdown
Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like$$\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}$$What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilites sum up to one.> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.
###Code
## Solution
def softmax(x):
return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1)
probabilities = softmax(out)
# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
###Output
torch.Size([64, 10])
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
1.0000])
###Markdown
Building networks with PyTorchPyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
###Code
from torch import nn
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
# Define sigmoid activation and softmax output
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
        # The order in which these are defined in __init__ doesn't matter.
def forward(self, x):
# Pass the input tensor through each of our operations
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
return x
###Output
_____no_output_____
###Markdown
Let's go through this bit by bit.```pythonclass Network(nn.Module):```Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.```pythonself.hidden = nn.Linear(784, 256)```This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.```pythonself.output = nn.Linear(256, 10)```Similarly, this creates another linear transformation with 256 inputs and 10 outputs.```pythonself.sigmoid = nn.Sigmoid()self.softmax = nn.Softmax(dim=1)```Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.```pythondef forward(self, x):```PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.```pythonx = self.hidden(x)x = self.sigmoid(x)x = self.output(x)x = self.softmax(x)```Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.Now we can create a `Network` object.
###Code
# Create the network and look at its text representation
model = Network()
model
###Output
_____no_output_____
###Markdown
You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.
###Code
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
# Inputs to hidden layer linear transformation
self.hidden = nn.Linear(784, 256)
# Output layer, 10 units - one for each digit
self.output = nn.Linear(256, 10)
def forward(self, x):
# Hidden layer with sigmoid activation
x = F.sigmoid(self.hidden(x))
# Output layer with softmax activation
x = F.softmax(self.output(x), dim=1)
return x
###Output
_____no_output_____
###Markdown
 Activation functionsSo far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent) and ReLU (rectified linear unit); these are mostly used to introduce non-linearity into the network.In practice, the ReLU function is used almost exclusively as the activation function for hidden layers. Your Turn to Build a Network> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.
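As a quick illustration (an addition, not part of the exercise or its solution), these activation functions are available directly in PyTorch and can be applied to any tensor:

```python
import torch
import torch.nn.functional as F

z = torch.linspace(-3, 3, steps=7)
print(torch.sigmoid(z))  # squashed into (0, 1)
print(torch.tanh(z))     # squashed into (-1, 1)
print(F.relu(z))         # negative values clipped to 0
```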
###Code
## Solution
class Network(nn.Module):
def __init__(self):
super().__init__()
# Defining the layers, 128, 64, 10 units each
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 64)
# Output layer, 10 units - one for each digit
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
''' Forward pass through the network, returns the output logits '''
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
x = F.softmax(x, dim=1)
return x
model = Network()
model
###Output
_____no_output_____
###Markdown
Initializing weights and biasesThe weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance.
###Code
print(model.fc1.weight)  # .weight gives the weight tensor
print(model.fc1.bias)    # .bias gives the bias tensor
###Output
Parameter containing:
tensor([[-0.0045, -0.0126, -0.0124, ..., -0.0300, 0.0095, 0.0086],
[ 0.0290, -0.0003, 0.0281, ..., -0.0241, -0.0256, 0.0218],
[ 0.0036, 0.0055, -0.0282, ..., -0.0211, 0.0298, 0.0268],
...,
[-0.0286, 0.0135, 0.0043, ..., -0.0015, 0.0218, -0.0307],
[ 0.0005, -0.0308, -0.0111, ..., 0.0262, 0.0187, 0.0061],
[ 0.0130, 0.0002, 0.0050, ..., 0.0255, 0.0198, -0.0293]],
requires_grad=True)
Parameter containing:
tensor([ 0.0333, 0.0230, 0.0170, 0.0075, -0.0234, 0.0309, -0.0287, 0.0177,
-0.0129, -0.0180, 0.0207, 0.0104, -0.0152, 0.0028, 0.0213, 0.0340,
-0.0240, -0.0132, -0.0076, 0.0082, 0.0231, -0.0109, 0.0239, 0.0170,
-0.0138, 0.0148, -0.0272, -0.0117, 0.0298, -0.0013, 0.0052, -0.0317,
-0.0194, -0.0303, -0.0145, 0.0122, -0.0128, -0.0152, -0.0075, 0.0096,
-0.0276, 0.0248, 0.0105, -0.0143, 0.0285, -0.0333, 0.0117, -0.0073,
0.0141, 0.0090, 0.0142, 0.0229, -0.0257, -0.0219, -0.0165, -0.0313,
0.0310, -0.0293, -0.0184, -0.0168, -0.0096, 0.0100, 0.0284, -0.0342,
-0.0018, -0.0220, 0.0215, 0.0248, 0.0164, 0.0053, -0.0183, -0.0082,
-0.0331, -0.0168, 0.0170, 0.0155, -0.0345, 0.0179, -0.0005, 0.0055,
0.0286, 0.0306, 0.0094, -0.0096, -0.0046, -0.0144, -0.0188, -0.0263,
0.0018, -0.0052, -0.0269, 0.0341, -0.0321, 0.0337, -0.0131, 0.0061,
0.0142, 0.0337, -0.0146, -0.0325, -0.0118, 0.0340, 0.0255, -0.0325,
0.0085, 0.0332, -0.0020, 0.0142, 0.0184, 0.0157, 0.0180, -0.0224,
0.0099, -0.0329, -0.0327, -0.0324, 0.0191, -0.0226, -0.0276, -0.0228,
0.0340, -0.0089, 0.0013, -0.0094, -0.0141, 0.0080, -0.0088, 0.0341],
requires_grad=True)
###Markdown
For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
###Code
# Set biases to all zeros
model.fc1.bias.data.fill_(0)
# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
###Output
_____no_output_____
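As a side note (an addition to the original notebook), the `torch.nn.init` module provides helpers that perform the same kind of in-place initialization; a minimal sketch using the `model` defined above:

```python
from torch import nn

# Equivalent in-place initialization with torch.nn.init helpers
nn.init.constant_(model.fc1.bias, 0)         # set biases to zero
nn.init.normal_(model.fc1.weight, std=0.01)  # weights ~ N(0, 0.01^2)
```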
###Markdown
Forward passNow that we have a network, let's see what happens when we pass in an image.
###Code
# Grab some data
dataiter = iter(trainloader)
images, labels = dataiter.next()
# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size
# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])
img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
###Output
_____no_output_____
###Markdown
As you can see above, our network has basically no idea what this digit is. That's because we haven't trained it yet; all the weights are random! Using `nn.Sequential`PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:
###Code
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
###Output
_____no_output_____
###Markdown
The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.
###Code
print(model[0])
model[0].weight
###Output
_____no_output_____
###Markdown
You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.
###Code
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
model
###Output
_____no_output_____
###Markdown
Now you can access layers either by integer or the name
###Code
print(model[0])
print(model.fc1)
###Output
_____no_output_____
|
jupyter-notebooks/kmeans_clustering_v01.ipynb
|
###Markdown
Information GeometryAuthor: Micael Veríssimo de Araújo ([email protected])
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
from sklearn.cluster import KMeans
data_files_path = '../data_files/data17_13TeV.AllPeriods.sgn.probes_lhmedium_EGAM2.bkg.VProbes_EGAM7.GRL_v97/'
file_name = 'data17_13TeV.AllPeriods.sgn.probes_lhmedium_EGAM2.bkg.VProbes_EGAM7.GRL_v97_et0_eta0.npz'
plots_path = '../plots_clusterizacao/'
my_seed = 13
jpsi_data = dict(np.load(data_files_path+file_name))
jpsi_data.keys()
###Output
_____no_output_____
###Markdown
The variables present in this dataset are:
###Code
list_of_features = list(jpsi_data['features'])
print(list_of_features)
###Output
['avgmu', 'L2Calo_ring_0', 'L2Calo_ring_1', 'L2Calo_ring_2', 'L2Calo_ring_3', 'L2Calo_ring_4', 'L2Calo_ring_5', 'L2Calo_ring_6', 'L2Calo_ring_7', 'L2Calo_ring_8', 'L2Calo_ring_9', 'L2Calo_ring_10', 'L2Calo_ring_11', 'L2Calo_ring_12', 'L2Calo_ring_13', 'L2Calo_ring_14', 'L2Calo_ring_15', 'L2Calo_ring_16', 'L2Calo_ring_17', 'L2Calo_ring_18', 'L2Calo_ring_19', 'L2Calo_ring_20', 'L2Calo_ring_21', 'L2Calo_ring_22', 'L2Calo_ring_23', 'L2Calo_ring_24', 'L2Calo_ring_25', 'L2Calo_ring_26', 'L2Calo_ring_27', 'L2Calo_ring_28', 'L2Calo_ring_29', 'L2Calo_ring_30', 'L2Calo_ring_31', 'L2Calo_ring_32', 'L2Calo_ring_33', 'L2Calo_ring_34', 'L2Calo_ring_35', 'L2Calo_ring_36', 'L2Calo_ring_37', 'L2Calo_ring_38', 'L2Calo_ring_39', 'L2Calo_ring_40', 'L2Calo_ring_41', 'L2Calo_ring_42', 'L2Calo_ring_43', 'L2Calo_ring_44', 'L2Calo_ring_45', 'L2Calo_ring_46', 'L2Calo_ring_47', 'L2Calo_ring_48', 'L2Calo_ring_49', 'L2Calo_ring_50', 'L2Calo_ring_51', 'L2Calo_ring_52', 'L2Calo_ring_53', 'L2Calo_ring_54', 'L2Calo_ring_55', 'L2Calo_ring_56', 'L2Calo_ring_57', 'L2Calo_ring_58', 'L2Calo_ring_59', 'L2Calo_ring_60', 'L2Calo_ring_61', 'L2Calo_ring_62', 'L2Calo_ring_63', 'L2Calo_ring_64', 'L2Calo_ring_65', 'L2Calo_ring_66', 'L2Calo_ring_67', 'L2Calo_ring_68', 'L2Calo_ring_69', 'L2Calo_ring_70', 'L2Calo_ring_71', 'L2Calo_ring_72', 'L2Calo_ring_73', 'L2Calo_ring_74', 'L2Calo_ring_75', 'L2Calo_ring_76', 'L2Calo_ring_77', 'L2Calo_ring_78', 'L2Calo_ring_79', 'L2Calo_ring_80', 'L2Calo_ring_81', 'L2Calo_ring_82', 'L2Calo_ring_83', 'L2Calo_ring_84', 'L2Calo_ring_85', 'L2Calo_ring_86', 'L2Calo_ring_87', 'L2Calo_ring_88', 'L2Calo_ring_89', 'L2Calo_ring_90', 'L2Calo_ring_91', 'L2Calo_ring_92', 'L2Calo_ring_93', 'L2Calo_ring_94', 'L2Calo_ring_95', 'L2Calo_ring_96', 'L2Calo_ring_97', 'L2Calo_ring_98', 'L2Calo_ring_99', 'L2Calo_et', 'L2Calo_eta', 'L2Calo_phi', 'L2Calo_reta', 'L2Calo_eratio', 'L2Calo_f1', 'el_lhtight', 'el_lhmedium', 'el_lhloose', 'el_lhvloose', 'et', 'eta', 'phi', 'eratio', 'reta', 'rphi', 'f1', 'f3', 'rhad', 'rhad1', 'wtots1', 'weta1', 'weta2', 'e277', 'deltaE', 'T0HLTElectronT2CaloTight', 'T0HLTElectronT2CaloMedium', 'T0HLTElectronT2CaloLoose', 'T0HLTElectronT2CaloVLoose', 'HLT__isLHTight', 'HLT__isLHMedium', 'HLT__isLHLoose', 'HLT__isLHVLoose']
###Markdown
Two variables will be used in the clustering process: $\langle \mu \rangle$ and $E_T$.
###Code
var_indexes = [list_of_features.index('avgmu'),
list_of_features.index('L2Calo_et'),]
print(var_indexes)
data_ = jpsi_data['data'][:, var_indexes]
mu_filter = data_[:,0] <= 60
sgn_filter = jpsi_data['target'][mu_filter]==1
bkg_filter = jpsi_data['target'][mu_filter]==0
data_ = data_[mu_filter,:]
print(data_.shape)
sgn_choices_filter = np.random.choice(data_[sgn_filter].shape[0], size=300)
bkg_choices_filter = np.random.choice(data_[bkg_filter].shape[0], size=300)
choices_filter = np.concatenate((sgn_choices_filter,bkg_choices_filter))
data_ = data_[choices_filter,:]
y = jpsi_data['target'][choices_filter]
print(data_.shape)
###Output
(600, 2)
###Markdown
Clustering Using Bregman DivergencesBregman divergences are divergences of the following form.**Definition** (Bregman, 1967; Censor and Zenios, 1998) Let $\phi : S \to \mathbb{R}$, $S = \text{dom}(\phi)$, be a strictly convex function defined on a convex set $S \subset \mathbb{R}^d$ such that $\phi$ is differentiable on its relative interior $(\text{ri}(S))$, assumed non-empty. The Bregman divergence $D_{\phi} : S\times \text{ri}(S) \to [0,\infty)$ is defined as:$$D_{\phi}(x,y) = \phi(x) - \phi(y) - \langle x-y, \nabla\phi(y)\rangle$$ Using Ringer
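Before the ringer-based k-means run below, here is a minimal sketch (an addition, not from the original analysis) of the definition above. With $\phi(x) = \lVert x \rVert^2$ the Bregman divergence reduces to the squared Euclidean distance, which is exactly the distortion that k-means minimizes:

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <x - y, grad_phi(y)>."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return phi(x) - phi(y) - np.dot(x - y, grad_phi(y))

# phi(x) = ||x||^2  ->  D_phi(x, y) = ||x - y||^2
phi = lambda v: np.dot(v, v)
grad_phi = lambda v: 2 * v

x, y = np.array([1.0, 2.0]), np.array([0.5, 1.0])
print(bregman_divergence(phi, grad_phi, x, y))  # 1.25 == ||x - y||^2
```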
###Code
km = KMeans(n_clusters = 3, n_jobs = 4, random_state=my_seed)
km.fit(data_)
centers = km.cluster_centers_
print(centers)
plt.plot(centers[:, 0], centers[:, 1], '*')
plt.xlabel(r'$\langle\mu\rangle$', fontsize=15)
plt.ylabel(r'$E_T$', fontsize=15)
plt.show()
#this will tell us to which cluster does the data observations belong.
new_labels = km.labels_
# Plot the identified clusters and compare with the answers
fig, axes = plt.subplots(1, 2, figsize=(16,8))
scarter = axes[0].scatter(data_[:, 0], data_[:, 1], c=y, cmap='inferno',
edgecolor='k', s=50, alpha=.7)
axes[0].legend(*scarter.legend_elements(),
loc="best", title="Classes", fontsize='x-large')
scarter1 = axes[1].scatter(data_[:, 0], data_[:, 1], c=new_labels, cmap='jet',
edgecolor='k', s=50, alpha=.2)
axes[1].legend(*scarter1.legend_elements(),
loc="best", title="Clusters", fontsize='x-large')
axes[0].set_xlabel(r'$\langle\mu\rangle$', fontsize=18)
axes[0].set_ylabel(r'$E_T$', fontsize=18)
axes[1].set_xlabel(r'$\langle\mu\rangle$', fontsize=18)
axes[1].set_ylabel(r'$E_T$', fontsize=18)
axes[0].tick_params(direction='in', length=10, width=5, colors='k', labelsize=20)
axes[1].tick_params(direction='in', length=10, width=5, colors='k', labelsize=20)
axes[0].set_title('Actual', fontsize=18)
axes[1].set_title('Predicted', fontsize=18)
plt.figure(figsize=(10,8))
plt.plot(data_[:, 0], data_[:, 1], 'o')
plt.xlabel(r'$\langle\mu\rangle$', fontsize=15)
plt.ylabel(r'$E_T$', fontsize=15)
plt.show()
from scipy import stats
a = stats.zscore(data_[:,0])
plt.figure(figsize=(10,8))
plt.hist(a, bins=50)
plt.yscale('log')
#plt.hist(sgn_data[:,0], bins=30)
plt.show()
plt.figure(figsize=(10,8))
plt.hist(data_[:,0], bins='sqrt')
#plt.hist(sgn_data[:,0], bins=30)
plt.show()
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111)#, projection='3d')
ax.scatter(data_[:,1], data_[:,0], s=10, alpha=0.6, edgecolors='w')
#ax.scatter(np.sum(bkg_data[:,1:], axis=1), bkg_data[:,0], s=10, alpha=0.6, edgecolors='w')
ax.set_xlabel(r'$E_T$')
ax.set_ylabel(r'$\langle\mu\rangle$')
#ax.set_zlabel(r'$\langle\mu\rangle$')
plt.show()
import pandas as pd
df = pd.DataFrame({'Sinal' : [95550],
'TTbar' : [775105.],
'Wbb' : [48691.],
'Wbl' : [8106.],
'Wcc' : [68830.],
'Wcl' : [937593.],
'Wll' : [2078541.],
'WW' : [290807.],
'WZ' : [345670.],
'ZZ' : [12881.],
}, index=['#Eventos'])
#df = df.T
df.head()
df.iloc[0,1:].sum()
df['Total de Background'] = df.iloc[0,1:].sum()
df = df.T
df
df.loc['Sinal', '#Eventos']
df['Signal/background'] = df.loc['Sinal', '#Eventos']/df['#Eventos']
df
print(df.to_latex())
###Output
\begin{tabular}{lrr}
\toprule
{} & \#Eventos & Signal/background \\
\midrule
Sinal & 95550.0 & 1.000000 \\
TTbar & 775105.0 & 0.123274 \\
Wbb & 48691.0 & 1.962375 \\
Wbl & 8106.0 & 11.787565 \\
Wcc & 68830.0 & 1.388203 \\
Wcl & 937593.0 & 0.101910 \\
Wll & 2078541.0 & 0.045970 \\
WW & 290807.0 & 0.328568 \\
WZ & 345670.0 & 0.276420 \\
ZZ & 12881.0 & 7.417902 \\
Total de Background & 4566224.0 & 0.020925 \\
\bottomrule
\end{tabular}
|
_posts/pandas/pie/pie-charts.ipynb
|
###Markdown
Pie Charts in Cufflinks
###Code
import cufflinks as cf
import pandas as pd
cf.set_config_file(world_readable=True,offline=False)
###Output
_____no_output_____
###Markdown
`datagen` can now generate a DataFrame with the structure required for a pie chart
###Code
pie=cf.datagen.pie()
pie.head()
###Output
_____no_output_____
###Markdown
`iplot` now accepts the parameter `kind='pie'` to generate a pie chart.`labels` indicates the column that contains the category labels, and `values` indicates the column that contains the values to be charted
###Code
pie.iplot(kind='pie',labels='labels',values='values')
###Output
_____no_output_____
###Markdown
Extra parameters can also be passed - `sort`: If `True` it sorts the labels by value- `pull`: Pulls the slices from the centre- `hole`: Sets the size of the inner hole- `textposition`: Sets the position of the legends for each slice (`'outside'`|`'inside'`)- `textinfo`: Sets the information to be displayed on the legends
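For example (a small added sketch, using only the parameters listed above), sorting the slices by value and displaying percentages only:

```python
pie.iplot(kind='pie', labels='labels', values='values',
          sort=True, textinfo='percent')
```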
###Code
pie.iplot(kind='pie',labels='labels',values='values',pull=.2,hole=.2,
colorscale='blues',textposition='outside',textinfo='value+percent')
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install publisher --upgrade
import publisher
publisher.publish(
'pie-charts.ipynb', 'pandas/pie-charts/', 'Pie Charts',
'How to make interactive Pie charts in Cufflinks with Plotly. '
'Two examples of Pie charts with Pandas and Cufflinks.',
title = 'Pandas Pie Charts | plotly',
thumbnail='/images/pie.png', language='pandas',
page_type='example_index', has_thumbnail='true', display_as='chart_type', order=22)
###Output
_____no_output_____
|
examples/notebooks/wls.ipynb
|
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticityIn this example, `w` is the standard deviation of the error. `WLS` requires that the weights are proportional to the inverse of the error variance.
###Code
mod_wls = sm.WLS(y, X, weights=1./(w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
[res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se,4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)Like `w`, `w_est` is proportional to the standard deviation, and so must be squared.
###Code
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./((w_est ** 2))).fit()
print(res_fwls.summary())
###Output
_____no_output_____
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticityIn this example, `w` is the standard deviation of the error. `WLS` requires that the weights are proportional to the inverse of the error variance.
###Code
mod_wls = sm.WLS(y, X, weights=1./(w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
[res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se,4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)Like `w`, `w_est` is proportional to the standard deviation, and so must be squared.
###Code
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./((w_est ** 2))).fit()
print(res_fwls.summary())
###Output
_____no_output_____
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticityIn this example, `w` is the standard deviation of the error. `WLS` requires that the weights are proportional to the inverse of the error variance.
###Code
mod_wls = sm.WLS(y, X, weights=1./(w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
[res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se,4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)Like `w`, `w_est` is proportional to the standard deviation, and so must be squared.
###Code
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./((w_est ** 2))).fit()
print(res_fwls.summary())
###Output
_____no_output_____
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import SimpleTable, default_txt_fmt
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6 // 10 :] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:, [0, 1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticityIn this example, `w` is the standard deviation of the error. `WLS` requires that the weights are proportional to the inverse of the error variance.
###Code
mod_wls = sm.WLS(y, X, weights=1.0 / (w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack(
[
[res_wls.bse],
[res_ols.bse],
[res_ols.HC0_se],
[res_ols.HC1_se],
[res_ols.HC2_se],
[res_ols.HC3_se],
]
)
se = np.round(se, 4)
colnames = ["x1", "const"]
rownames = ["WLS", "OLS", "OLS_HC0", "OLS_HC1", "OLS_HC3", "OLS_HC3"]
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb, X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, "o", label="Data")
ax.plot(x, y_true, "b-", label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, "r--")
ax.plot(x, iv_u_ols, "r--", label="OLS")
ax.plot(x, iv_l_ols, "r--")
# WLS
ax.plot(x, res_wls.fittedvalues, "g--.")
ax.plot(x, iv_u, "g--", label="WLS")
ax.plot(x, iv_l, "g--")
ax.legend(loc="best")
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)Like `w`, `w_est` is proportional to the standard deviation, and so must be squared.
###Code
resid1 = res_ols.resid[w == 1.0]
var1 = resid1.var(ddof=int(res_ols.df_model) + 1)
resid2 = res_ols.resid[w != 1.0]
var2 = resid2.var(ddof=int(res_ols.df_model) + 1)
w_est = w.copy()
w_est[w != 1.0] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1.0 / ((w_est ** 2))).fit()
print(res_fwls.summary())
###Output
_____no_output_____
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.iolib.table import SimpleTable, default_txt_fmt
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6 // 10 :] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:, [0, 1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticityIn this example, `w` is the standard deviation of the error. `WLS` requires that the weights are proportional to the inverse of the error variance.
###Code
mod_wls = sm.WLS(y, X, weights=1.0 / (w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack(
[
[res_wls.bse],
[res_ols.bse],
[res_ols.HC0_se],
[res_ols.HC1_se],
[res_ols.HC2_se],
[res_ols.HC3_se],
]
)
se = np.round(se, 4)
colnames = ["x1", "const"]
rownames = ["WLS", "OLS", "OLS_HC0", "OLS_HC1", "OLS_HC3", "OLS_HC3"]
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb, X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
pred_ols = res_ols.get_prediction()
iv_l_ols = pred_ols.summary_frame()["obs_ci_lower"]
iv_u_ols = pred_ols.summary_frame()["obs_ci_upper"]
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
pred_wls = res_wls.get_prediction()
iv_l = pred_wls.summary_frame()["obs_ci_lower"]
iv_u = pred_wls.summary_frame()["obs_ci_upper"]
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(x, y, "o", label="Data")
ax.plot(x, y_true, "b-", label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, "r--")
ax.plot(x, iv_u_ols, "r--", label="OLS")
ax.plot(x, iv_l_ols, "r--")
# WLS
ax.plot(x, res_wls.fittedvalues, "g--.")
ax.plot(x, iv_u, "g--", label="WLS")
ax.plot(x, iv_l, "g--")
ax.legend(loc="best")
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)Like `w`, `w_est` is proportional to the standard deviation, and so must be squared.
###Code
resid1 = res_ols.resid[w == 1.0]
var1 = resid1.var(ddof=int(res_ols.df_model) + 1)
resid2 = res_ols.resid[w != 1.0]
var2 = resid2.var(ddof=int(res_ols.df_model) + 1)
w_est = w.copy()
w_est[w != 1.0] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1.0 / ((w_est ** 2))).fit()
print(res_fwls.summary())
###Output
_____no_output_____
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticityIn this example, `w` is the standard deviation of the error. `WLS` requires that the weights are proportional to the inverse of the error variance.
###Code
mod_wls = sm.WLS(y, X, weights=1./(w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
[res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se,4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)Like `w`, `w_est` is proportional to the standard deviation, and so must be squared.
###Code
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./((w_est ** 2))).fit()
print(res_fwls.summary())
###Output
_____no_output_____
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticityIn this example, `w` is the standard deviation of the error. `WLS` requires that the weights are proportional to the inverse of the error variance.
###Code
mod_wls = sm.WLS(y, X, weights=1./(w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
[res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se,4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)Like `w`, `w_est` is proportional to the standard deviation, and so must be squared.
###Code
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./((w_est ** 2))).fit()
print(res_fwls.summary())
###Output
_____no_output_____
###Markdown
Weighted Least Squares
###Code
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
###Output
_____no_output_____
###Markdown
WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
###Output
_____no_output_____
###Markdown
WLS knowing the true variance ratio of heteroscedasticity
###Code
mod_wls = sm.WLS(y, X, weights=1./w)
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
_____no_output_____
###Markdown
OLS vs. WLSEstimate an OLS model for comparison:
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
###Output
_____no_output_____
###Markdown
Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
###Code
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
[res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se,4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
###Output
_____no_output_____
###Markdown
Calculate OLS prediction interval:
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
###Output
_____no_output_____
###Markdown
Draw a plot to compare predicted values in WLS and OLS:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Feasible Weighted Least Squares (2-stage FWLS)
###Code
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./w_est).fit()
print(res_fwls.summary())
###Output
_____no_output_____
|
jup_notebooks/programmierung/01_Schleifen.ipynb
|
###Markdown
Loops=====Loops stand for repeated statements
###Code
n = 5
while n > 6:
print(n)
n = n - 1
print("Blastoff!")
print(n)
###Output
5
###Markdown
Logical Errors----------------
###Code
n = 0
while n > 0:
print("einseifen")
print("abspülen")
print("abtrocknen!")
###Output
abtrocknen!
###Markdown
Another Loop----------------------  Leaving a Loop-------------------------
###Code
while True:
line = input('> ')
if line == 'done' :
break
print(line)
print("Done!")
###Output
> 54
54
> done
Done!
|
booklet_hidrokit/0.4.0/ipynb/manual/taruma_0_3_5_hk102_upsampling.ipynb
|
###Markdown
Based on [102](https://github.com/taruma/hidrokit/issues/102): **upsampling data**Issue references:- `hidrokit.contrib.taruma.hk98`. ([manual/notebook](https://gist.github.com/taruma/aca7f90c8fbb0034587809883d0d9e92)\). **create a summary/recap of time series data**Problem description:- Convert monthly data into daily data so that it can be recapped using the functions available in `hidrokit.contrib.taruma.hk98`.Solution strategy:- When upsampling, missing values can be filled using an interpolation method or with the value from the start of the month (_default_). PREPARATION AND DATASET
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = [15, 5]
try:
import hidrokit
except ModuleNotFoundError:
!pip install git+https://github.com/taruma/hidrokit.git@latest -q
import hidrokit
print(f'hidrokit version: {hidrokit.__version__}')
!wget -O pet.xlsx "https://taruma.github.io/assets/hidrokit_dataset/fjmock_sample.xlsx" -q
!wget -O sta.xlsx "https://taruma.github.io/assets/hidrokit_dataset/data_daily_sample.xlsx" -q
sta_path = 'sta.xlsx'
pet_path = 'pet.xlsx'
df_pet = (pd.read_excel(pet_path, usecols=[3])
.set_index(
pd.date_range('20050101', '20160101', freq='MS', closed='left')
)
)
df_pet.head()
from hidrokit.contrib.taruma import hk88
df_sta = hk88.read_workbook(sta_path, ['STA_B'])
df_sta.head()
###Output
_____no_output_____
###Markdown
CODE
###Code
# ref: https://stackoverflow.com/q/29612705/4886384
def upsampling(df, freq='D', fill_method='ffill',
use_inter=False, inter_method='linear', inter_keys={},
reindex=False):
start = df.index.min() - pd.DateOffset(day=1)
end = df.index.max() - pd.DateOffset(day=31)
date = pd.date_range(start, end, freq=freq)
newdf = df.reindex(date)
if reindex:
return newdf
if use_inter:
return newdf.interpolate(method=inter_method, **inter_keys)
else:
return newdf.fillna(method=fill_method)
###Output
_____no_output_____
###Markdown
APPLICATIONThis _notebook_ contains two datasets: `df_sta` (rainfall data for station B) and `df_pet` (evapotranspiration data). The two datasets differ in the number of records and in their observations: `df_sta` holds daily observations, while `df_pet` holds monthly observations. In F.J. Mock modeling (`hidrokit.contrib.taruma.hk96`) the input data can have an arbitrary period (every 3 days, 15 days, etc.). The `df_sta` data can be recapped using the `contrib.taruma.hk98` module, whereas the `df_pet` data cannot be adapted to such periods because it is not daily data.The `upsampling` function handles this problem by converting monthly data into daily data, filling the gaps with various methods (interpolation, or the value from the start of the month). The `upsampling` functionThis function converts monthly/periodic data into daily data using various methods (fill or interpolation). Required argument:- `df`: a dataset as a `pandas.DataFrame` object whose index is a `DateTimeIndex`/`Timestamp` object.Optional arguments:- `fill_method='ffill'`: fills missing data using the _forward fill_ method. The method must be a valid value for the `method` argument of `pandas.DataFrame.fillna()`.- `freq='D'`: converts to daily form.- `use_inter=None`: set to `True` to use `inter_method`.- `inter_method='linear'`: interpolation method.- `inter_keys={}`: _keywords_ for the interpolation method.- `reindex=False`: set to `True` to get the reindexed output only, without filling missing data. The `df_pet` datasetBelow is the `df_pet` information before _upsampling_.
###Code
df_pet.info()
df_pet.head()
df_pet.plot();
###Output
_____no_output_____
###Markdown
Before _upsampling_, the data has only $132$ observations with monthly frequency `MS`. Using _forward fill_ (_default_)
###Code
df_pet_ffill = upsampling(df_pet)
df_pet_ffill.info()
df_pet_ffill.head()
df_pet_ffill.plot();
###Output
_____no_output_____
###Markdown
After _upsampling_, the `df_pet` dataframe has $4017$ observations with daily frequency `D`. Using interpolation
###Code
df_pet_inter = upsampling(df_pet, use_inter=True, inter_method='linear')
df_pet_inter.info()
df_pet_inter.head()
df_pet_inter.plot();
###Output
_____no_output_____
###Markdown
The difference from _forward fill_ can be seen in the plot.
###Code
df_pet_quad = upsampling(df_pet, use_inter=True, inter_method='quadratic')
df_pet_quad.info()
df_pet_quad.head()
df_pet_quad.plot();
###Output
_____no_output_____
###Markdown
When using the `quadratic` method, the curve becomes smoother. Reindexing onlyThis argument is provided to give flexibility when you do not want to use an interpolation/fill method.
###Code
df_pet_re = upsampling(df_pet, reindex=True)
df_pet_re.info()
df_pet_re.head()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 4017 entries, 2005-01-01 to 2015-12-31
Freq: D
Data columns (total 1 columns):
PET 132 non-null float64
dtypes: float64(1)
memory usage: 62.8 KB
###Markdown
Combining with the `taruma.hk98` module (daily data recap)This module can be combined with the `taruma.hk98` module, which is used to recap daily data.Since `df_pet` has already been converted to daily data (`df_pet_ffill`), it can be combined with `df_sta`. The `dataset` dataframe starts from 2005 because `df_pet` only starts in 2005.
###Code
dataset = pd.concat([df_sta, df_pet_ffill], axis=1).loc['20050101':]
dataset
###Output
_____no_output_____
###Markdown
After preparing the `dataset`, the recap can be done with the `taruma.hk98` module. For the rainfall data, we compute the rainfall total, the number of rainy days, and the number of days in the period. For the PET data, we only recap by averaging its values over each period. The period used is 11 days (`11D`).
###Code
from hidrokit.contrib.taruma import hk98
NDAYS = '11D'
n_rain = lambda x: (x>0).sum()
ufunc, ufunc_col = [np.sum, n_rain, len], ['SUM', 'NRAIN', 'NDAYS']
dataset_pet = hk98.summary_station(dataset, 'PET', np.mean, 'PET',
n_days=NDAYS)
dataset_prep = hk98.summary_station(dataset, 'STA_B', ufunc, ufunc_col,
n_days=NDAYS)
data_input = pd.concat([dataset_prep, dataset_pet], axis=1)
data_input.columns = data_input.columns.droplevel(0)
data_input.head(10)
data_input.sample(10)
###Output
_____no_output_____
|
notebooks/TV_modeling_with_Virtual_People.ipynb
|
###Markdown
Modeling linear TV reach with Virtual [email protected], 2020In our [whitepaper](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/1ab33ed15e0a7724c7b0b47361da16259fee8eef.pdf) we cover how VID is applied to digital event-level data. Linear TV can be measured with a panel, or extrapolated from a set of metered OTT devices, and event-level data is not available.In this colab we discuss the application of VID to panel data. Application to OTT devices is done analogously.We discuss two approaches:**Aggregate)** Given a panel measurement, an arbitrary aggregate technique is used to estimate reach in each demographic bucket. Then an appropriate number of virtual people are sampled from the total virtual population of the bucket to match the aggregate estimate.**VID-native)** Each panelist is consistently hashed to a set of virtual people. To report on an audience one takes the union of all virtual people to which reached panelists hash. Reach is the number of these people.In both approaches the virtual people can be stored in a sketch and transmitted into the cross-media measurement system's secure cardinality estimation framework to be unioned with the digital reach.The **advantage of the Aggregate approach** is that* it is compatible with an arbitrary TV audience measurement methodology and simply encodes it for deduplication with the digital reach.The **advantages of the VID-native approach** are that* it guarantees consistency for TV measurement and* it may lead to a simpler engineering infrastructure.In the rest of this colab we implement these approaches conceptually and run some simple simulations. Creating panel and virtual people poolsWe assume we have 4 age buckets, each of width 20 years, starting from 20 and ending at 100. The population distribution is assumed to be equal by bucket. We sample panelists from this population and create pools of virtual people with appropriate sizes.
###Code
import hashlib
num_panelists = 100
num_vids = 1000
random_age = lambda x: hash("age-" + str(x)) % 80 + 20
def hash(x):
return int(hashlib.sha256(x.encode()).hexdigest(), 16)
def hash_all(*elements):
return hash('-'.join(map(str, elements)))
def float_hash_all(*elements):
precision = 1000000000
return (hash_all(*elements) % precision) / precision
def affinity_hash(options, *elements):
affinities = [float_hash_all(option, *elements) for option in options]
return options[affinities.index(max(affinities))]
def make_panel():
panel = list(range(num_panelists))
panelist_age = {panelist: random_age(panelist) for panelist in panel}
return panel, panelist_age
def make_virtual_people_by_age():
offset = 1000000
virtual_people_for_age_bucket = {}
age_from = 20
age_buckets = [(a, a + 20) for a in range(20, 100, 20)]
for bucket in age_buckets:
num_people_for_bucket = num_vids // len(age_buckets)
virtual_people_for_age_bucket[bucket] = list(
range(offset, offset + num_people_for_bucket))
offset = offset + num_people_for_bucket
return virtual_people_for_age_bucket
panel, panelist_age = make_panel()
people_map = make_virtual_people_by_age()
virtual_person_age_bucket = {
person: bucket
for bucket, people in people_map.items()
for person in people}
# Flatten the per-bucket lists into a single list of virtual people
all_virtual_people = [p for people in people_map.values() for p in people]
def PanelistAgeBucket(panelist):
age = panelist_age[panelist]
for a, b in people_map:
if a <= age < b:
return a, b
assert False, "Unknown age for panelist: " + str(panelist)
total_panel_in_bucket = {}
panelists_of_bucket = {}
for p in panel:
bucket = PanelistAgeBucket(p)
total_panel_in_bucket[bucket] = total_panel_in_bucket.get(bucket, 0) + 1
panelists_of_bucket[bucket] = panelists_of_bucket.get(bucket, []) + [p]
###Output
_____no_output_____
###Markdown
AggregateWe implement the aggregate approach with the `AggregateVirtualAudienceEstimator` class.The basic estimation function is `create_virtual_audience_from_reach`, which, given an arbitrary reach in population buckets, samples virtual people from these buckets. We then implement `AggregateVirtualAudienceEstimator` by first estimating the reach in buckets from the panel reach and then delegating virtual people sampling to `create_virtual_audience_from_reach`.It is very important that the core function `create_virtual_audience_from_reach` does not explicitly depend on the panelists reached, which means that reach estimation can be done with arbitrary technology.
###Code
def create_virtual_audience_from_reach(
audience_key, bucket_fraction_reach):
audience = []
for bucket in bucket_fraction_reach:
for person in people_map[bucket]:
if float_hash_all(
"audience", audience_key, person) < bucket_fraction_reach[bucket]:
audience.append(person)
return audience
def estimate_aggregate_reach(panelists):
bucket_reach = {}
for p in panelists:
bucket = PanelistAgeBucket(p)
bucket_reach[bucket] = bucket_reach.get(bucket, 0) + 1
bucket_fraction = {
bucket: bucket_reach[bucket] / total_panel_in_bucket[bucket]
for bucket in bucket_reach}
return bucket_fraction
def create_virtual_audience_with_aggregate_method(audience_key, panelists):
per_bucket_reach = estimate_aggregate_reach(panelists)
return create_virtual_audience_from_reach(
audience_key, per_bucket_reach)
class AggregateVirtualAudienceEstimator(object):
def __init__(self, seed):
self.seed = seed
def estimate_from_panelists(self, audience_key, panelists):
return create_virtual_audience_with_aggregate_method(
'%s-%s' % (self.seed, audience_key), panelists)
###Output
_____no_output_____
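###Markdown
As a usage illustration (the reached subset below is hypothetical), the sketch reaches every third panelist and reports how many virtual people the aggregate estimator assigns to that audience.
###Code
# Hypothetical example: treat every third panelist as reached.
demo_panelists = panel[::3]
demo_estimator = AggregateVirtualAudienceEstimator(seed=1)
demo_audience = demo_estimator.estimate_from_panelists("demo-campaign", demo_panelists)
print("panelists reached:", len(demo_panelists))
print("virtual people in the aggregate audience:", len(demo_audience))
###Output
_____no_output_____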
###Markdown
VID native

The VID-native approach works more like how digital reach is estimated. A set of virtual people is assigned to each event. The reach of a set of events is estimated as the total number of unique virtual people in the union of the sets of virtual people assigned to these events.

To do this assignment for a given time interval, we assign each virtual person to a panelist that represents this person. Thus for each panelist we end up with a set of virtual people that this panelist represents. The number of virtual people assigned to a panelist is to be proportional to the weight of the panelist. Each event of the panelist gets assigned this set of virtual people.

Affinity hashing (a particular form of consistent hashing) is used to do this assignment. Thus, if the weights of the panelists change from day to day, only the minimal necessary number of virtual people get re-assigned.
###Code
import collections
def map_virtual_people_to_panelists(seed):
panelist_of_virtual_person = {}
for bucket, people in people_map.items():
eligible_panelists = panelists_of_bucket[bucket]
for person in people:
panelist_of_virtual_person[person] = affinity_hash(
eligible_panelists, seed, person)
return panelist_of_virtual_person
def map_panelists_to_virtual_people(seed):
vid_to_panelist = map_virtual_people_to_panelists(seed)
panelist_to_virtual_people = collections.defaultdict(list)
for vid, panelist in vid_to_panelist.items():
panelist_to_virtual_people[panelist].append(vid)
return panelist_to_virtual_people
def create_virtual_audience_with_native_method(
panelist_to_virtual_people, panelists):
result = []
for p in set(panelists):
result.extend(panelist_to_virtual_people[p])
return result
class NativeVirtualAudienceEstimator(object):
def __init__(self, seed):
self.panelist_to_vids = map_panelists_to_virtual_people(seed)
def estimate(self, panelists):
return create_virtual_audience_with_native_method(
self.panelist_to_vids, panelists)
###Output
_____no_output_____
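###Markdown
As a small illustration of the re-assignment property (the one-panelist removal below is a hypothetical scenario), the sketch maps virtual people to panelists twice, once for the full panel and once with a single panelist removed, and counts how many assignments change; affinity hashing only moves the virtual people whose winning panelist disappeared.
###Code
# Compare assignments for the full panel vs. a panel with one panelist removed.
full_mapping = map_virtual_people_to_panelists("demo-seed")
removed_panelist = panel[0]
reduced_mapping = {}
for bucket, people in people_map.items():
    eligible = [p for p in panelists_of_bucket[bucket] if p != removed_panelist]
    for person in people:
        reduced_mapping[person] = affinity_hash(eligible, "demo-seed", person)
changed = sum(1 for person in full_mapping
              if full_mapping[person] != reduced_mapping[person])
print("virtual people re-assigned:", changed, "out of", len(full_mapping))
###Output
_____no_output_____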
###Markdown
Simulation

We run simulations verifying that the Aggregate and Native methods estimate the reach and demographic composition accurately.

The following random process is used to generate an audience: the campaign parameter `size` (roughly corresponding to the total fraction of the population that will be reached) is sampled from a uniform [0,1] distribution, and the fraction of each demo bucket reached is sampled from a Beta distribution with expectation equal to `size`.

This process leads to campaigns of different sizes and different demographic skews.
###Code
import numpy
from scipy import stats
from matplotlib import pyplot
def simulate_campaign_bucket_reach(campaign_key):
size = float_hash_all("SimulateCampaign", campaign_key)
result = {}
for bucket in people_map:
h = float_hash_all("SCBR", campaign_key, bucket)
bucket_penetration = stats.beta.ppf(h, 5 * size, 5 * (1 - size))
result[bucket] = bucket_penetration
return result
def simulate_campaign_panel_reach(campaign_key):
reached = []
bucket_reach = simulate_campaign_bucket_reach(campaign_key)
for bucket, panelists in panelists_of_bucket.items():
for p in panelists:
if float_hash_all("SCPR", campaign_key, p) < bucket_reach[bucket]:
reached.append(p)
return reached
###Output
_____no_output_____
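###Markdown
For illustration (the campaign key below is hypothetical), the cell simulates a single campaign and prints the true per-bucket reach fractions next to the number of panelists it reached, showing the demographic skew the Beta sampling produces.
###Code
# Simulate one hypothetical campaign and inspect its bucket-level reach.
demo_campaign = "demo-campaign-0"
demo_bucket_reach = simulate_campaign_bucket_reach(demo_campaign)
demo_panel_reach = simulate_campaign_panel_reach(demo_campaign)
for bucket, fraction in sorted(demo_bucket_reach.items()):
    print(bucket, "true fraction reached: %.2f" % fraction)
print("panelists reached:", len(demo_panel_reach))
###Output
_____no_output_____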
###Markdown
Reach

The first scatter charts compare the methods to the true reach and to the fraction of panelists reached.

We observe that the Aggregate and Native methods produce close results. They are closer to each other than the Panel method is to the truth. Note that this is the case because we used simple panelist counting as the basis estimate for the Aggregate method. The advantage of the Aggregate method is that an arbitrary reach estimation methodology can be used as the basis, and it is not necessarily reproducible via the Native method.
###Code
import pandas
from pandas.plotting import scatter_matrix
num_campaigns = 1000
campaign_true_reach = {
"campaign-%d" % i: simulate_campaign_bucket_reach("campaign-%d" % i)
for i in range(num_campaigns)
}
campaign_panel_reach = {
"campaign-%d" % i: simulate_campaign_panel_reach("campaign-%d" % i)
for i in range(num_campaigns)
}
aggregate_reach = []
native_reach = []
true_reach = []
simple_panel_reach = []
native_estimator = NativeVirtualAudienceEstimator(42)
aggregate_estimator = AggregateVirtualAudienceEstimator(42)
for campaign, panelists in campaign_panel_reach.items():
true_reach.append(sum(campaign_true_reach[campaign].values()))
native_reach.append(len(native_estimator.estimate(panelists)))
aggregate_reach.append(
len(aggregate_estimator.estimate_from_panelists(campaign, panelists)))
simple_panel_reach.append(
len(panelists)
)
df = pandas.DataFrame({'true_reach': true_reach,
'simple_panel_reach': simple_panel_reach,
'aggregate_reach': aggregate_reach,
'native_reach': native_reach})
_ = scatter_matrix(df, figsize=(10, 10), hist_kwds={'bins':30})
###Output
_____no_output_____
###Markdown
Demographic composition

Next we compare the methods for estimating the fraction of the reach that belongs to the under-60 group, which on average happens to be half of the reached audience.
###Code
aggregate_under_60 = []
native_under_60 = []
true_under_60 = []
simple_panel_under_60 = []
for campaign, panelists in campaign_panel_reach.items():
if len(panelists) < 10:
continue
true_under_60.append(
(campaign_true_reach[campaign][20, 40] +
campaign_true_reach[campaign][40, 60]) /
sum(campaign_true_reach[campaign].values())
)
simple_panel_under_60.append(
len([p for p in panelists if panelist_age[p] < 60])
/ len(panelists))
aggregate_audience = aggregate_estimator.estimate_from_panelists(campaign,
panelists)
aggregate_under_60.append(
len([p for p in aggregate_audience if virtual_person_age_bucket[p] in [(20, 40), (40, 60)]]) /
len(aggregate_audience)
)
native_audiece = native_estimator.estimate(panelists)
native_under_60.append(
len([p for p in native_audiece if virtual_person_age_bucket[p] in [(20, 40), (40, 60)]]) /
len(native_audiece)
)
df = pandas.DataFrame({'true_under_60': true_under_60,
'simple_panel_under_60': simple_panel_under_60,
'aggregate_under_60': aggregate_under_60,
'native_under_60': native_under_60})
_ = scatter_matrix(df, figsize=(10, 10), hist_kwds={'bins':30})
###Output
_____no_output_____
|
cn/.ipynb_checkpoints/sicp-1-33-checkpoint.ipynb
|
###Markdown
SICP Exercise (1.33) solution notes

SICP Exercise 1.33 pushes the previously abstracted accumulate procedure one level higher, asking us to define an accumulate procedure with a filter. This procedure takes one extra parameter, which is another procedure used as the filter. For example, calling (filtered-accumulate odd? + 0 my-self 1 next-int 100) enumerates the numbers from 1 to 100, calls (odd? n) on each number, adds the number to the accumulated result if the call returns true, and skips it otherwise. Once the requirement is clear, the code is fairly simple to write; mine is as follows:
###Code
(define (filtered-accumulate filter combiner null-value term a next b)
(if (> a b)
null-value
(if (filter a)
(combiner (term a)
(filtered-accumulate filter combiner null-value term (next a) next b))
(combiner null-value
(filtered-accumulate filter combiner null-value term (next a) next b))
)))
###Output
_____no_output_____
###Markdown
The key part is the call to (filter a): if (filter a) is true, the value is accumulated; if (filter a) is false, null-value is accumulated instead. By our definition, accumulating null-value is equivalent to not accumulating at all. Out of habit, I also implemented an iterative version of filtered-accumulate:
###Code
(define (filtered-accumulate-iter filter combiner null-value term a next b )
(define (iter a result)
(if (> a b)
result
(if (filter a)
(iter (next a) (combiner (term a) result))
(iter (next a) (combiner null-value result)))))
(iter a null-value))
###Output
_____no_output_____
###Markdown
For testing, let's first define the term function and the next function:
###Code
(define (square x) (* x x))
(define (increase x) (+ x 1))
###Output
_____no_output_____
###Markdown
For the filter function, let's use the odd-number predicate mentioned at the beginning; the corresponding Scheme function is (odd?).
###Code
(odd? 10)
(filtered-accumulate odd? * 1 square 1 increase 7)
(filtered-accumulate-iter odd? * 1 square 1 increase 7 )
###Output
_____no_output_____
###Markdown
Next, the exercise asks us to use filtered-accumulate to compute the sum of the primes from a to b, called as follows: (filtered-accumulate prime? + 0 my-self 1 increase 10). Of course, we first need to copy over the prime? function implemented in an earlier exercise:
###Code
(define (smallest-divisor n)
(find-divisor n 2))
(define (find-divisor n test-divisor)
(cond ((> (square test-divisor) n) n)
((divides? test-divisor n) test-divisor)
(else (find-divisor n (+ test-divisor 1)))))
(define (divides? a b)
(= (remainder b a) 0))
(define (square x)
(* x x))
(define (prime? n)
(= n (smallest-divisor n)))
(prime? 101)
(define (my-self x) x)
(filtered-accumulate prime? + 0 my-self 1 increase 10)
###Output
_____no_output_____
###Markdown
Finally, the exercise also asks us to use filtered-accumulate to compute the product of all positive integers less than n that are relatively prime to n. The procedure is defined as follows:
###Code
(define (relatively-prime-accumulate n)
(filtered-accumulate (lambda (i) (= (gcd i n) 1))
*
1
my-self
1
increase
(- n 1)) )
###Output
_____no_output_____
###Markdown
Then, when running this in the notebook, I was sad to find that Calysto Scheme does not implement gcd, so I implemented it by hand:
###Code
(define (gcd a b)
(if (= b 0)
a
(gcd b (remainder a b))))
(gcd 10 8)
(relatively-prime-accumulate 10)
###Output
_____no_output_____
|
kaggle-digit-recognizer/kaggle-digit-recognizer.ipynb
|
###Markdown
Load data

Download files

Download the datasets using the kaggle [api](https://github.com/Kaggle/kaggle-api):
###Code
# Assumed setup (the original notebook presumably defined these in an earlier cell):
from kaggle.api.kaggle_api_extended import KaggleApi
COMPETITION_NAME = 'digit-recognizer'  # assumed competition slug
api = KaggleApi()
api.authenticate()
api.competition_download_files(COMPETITION_NAME)
###Output
_____no_output_____
###Markdown
Load data
###Code
import numpy as np  # assumed import, needed below

train = np.loadtxt('train.csv', delimiter=',', skiprows=1)
train_images = train[...,1:]
train_labels = train[...,0]
test_images = np.loadtxt('test.csv', delimiter=',', skiprows=1)
train_labels
###Output
_____no_output_____
###Markdown
Explore data
###Code
train_images.shape
len(train_labels)
test_images.shape
###Output
_____no_output_____
###Markdown
Preprocess data

Pixels fall in the range 0..255.
###Code
import matplotlib.pyplot as plt  # assumed import, needed for plotting

plt.figure()
plt.imshow(np.reshape(train_images[0], (28,28)))
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
This should be rescaled to a range 0..1 before feeding it into the neural network
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
Display the first 25 images from the train set with their class names to verify that the data is correct.
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(np.reshape(train_images[i], (28,28)), cmap=plt.cm.binary)
plt.xlabel(train_labels[i])
plt.show()
###Output
_____no_output_____
###Markdown
Build model

Now build the model by defining its layers and compiling it.

Set up the layers
###Code
import tensorflow as tf  # assumed imports, needed for the model definition
from tensorflow import keras

model = keras.Sequential([
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
###Output
_____no_output_____
###Markdown
Compile the model
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
WARNING:tensorflow:From C:\Users\Hugo\.conda\envs\tf-keras\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
###Markdown
Train

Feed the training data into the model to train it.
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
Epoch 1/5
42000/42000 [==============================] - 7s 160us/sample - loss: 0.2908 - acc: 0.9183
Epoch 2/5
###Markdown
Evaluate

No separate evaluation data is available (see the sketch after this cell for one way to hold out validation data from the training set).

Predict

With the model trained, we can use it to make predictions on some images.
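Relating to the Evaluate note above: a minimal sketch (an addition, not in the original notebook) of one way to get an evaluation signal when the Kaggle test set has no labels, by letting Keras hold out part of the training data via `validation_split`.
###Code
# Hypothetical variant of the earlier training call: hold out 10% of the
# training data so Keras reports validation loss/accuracy per epoch.
history = model.fit(train_images, train_labels, epochs=5, validation_split=0.1)
###Output
_____no_output_____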
###Code
predictions = model.predict(test_images)
predictions[0]
np.argmax(predictions[0])
def plot_image(i, predictions_array, img):
predictions_array, img = predictions_array[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(np.reshape(img, (28,28)), cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
color = 'blue'
plt.xlabel("{} {:2.0f}%".format(predicted_label,
100*np.max(predictions_array)),
color=color)
def plot_value_array(i, predictions_array):
predictions_array = predictions_array[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
# thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions)
plt.show()
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions)
plt.show()
###Output
_____no_output_____
###Markdown
Submit

Finally, make sure that the predictions are saved in a csv file in the correct format:

```
ImageId,Label
1,3
2,7
3,8
(27997 more lines)
```
###Code
def labels(x):
return round(np.argmax(x), 0)
predict_labels = np.apply_along_axis(labels, axis=1, arr=predictions)
import time  # assumed import, needed for the timestamp

TIMESTAMP = time.strftime('%Y%m%d-%H%M%S')
FILE_NAME = 'submission-{}.csv'.format(TIMESTAMP)
np.savetxt(FILE_NAME, np.dstack((np.arange(1, len(predict_labels) + 1), predict_labels))[0],
fmt='%1.0f', delimiter=',', header='ImageId,Label', comments='')
print(FILE_NAME)
api.competition_submit(FILE_NAME, 'Submission {}'.format(TIMESTAMP), COMPETITION_NAME)
###Output
100%|███████████████████████████████████████████████████████████████████████████████| 235k/235k [00:03<00:00, 66.9kB/s]
|
notebooks/examples/bar_chart_with_highlight.ipynb
|
###Markdown
Bar Chart with Highlight
------------------------
This example shows a Bar chart that highlights values beyond a threshold.
###Code
import altair as alt
alt.data_transformers.enable('json')
import pandas as pd
data = pd.DataFrame({"Day": range(1, 16),
"Value": [54.8, 112.1, 63.6, 37.6, 79.7, 137.9, 120.1, 103.3,
394.8, 199.5, 72.3, 51.1, 112.0, 174.5, 130.5]})
data2 = pd.DataFrame([{"ThresholdValue": 300, "Threshold": "hazardous"}])
bar1 = alt.Chart(data).mark_bar().encode(
x='Day:O',
y='Value:Q'
)
bar2 = alt.Chart(data).mark_bar(color="#e45755").encode(
x='Day:O',
y='baseline:Q',
y2='Value:Q'
).transform_filter(
"datum.Value >= 300"
).transform_calculate(
"baseline", "300"
)
rule = alt.Chart(data2).mark_rule().encode(
y='ThresholdValue:Q'
)
text = alt.Chart(data2).mark_text(
align='left', dx=215, dy=-5
).encode(
alt.Y('ThresholdValue:Q', axis=alt.Axis(title='PM2.5 Value')),
text=alt.value('hazardous')
)
bar1 + text + bar2 + rule
###Output
_____no_output_____
|
Sentimal_Analysis.ipynb
|
###Markdown
Load Data
###Code
from keras.datasets import imdb
import pandas as pd
import numpy as np
import string
import re
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d lakshmi25npathi/imdb-dataset-of-50k-movie-reviews
!unzip /content/imdb-dataset-of-50k-movie-reviews.zip
dataset = pd.read_csv("/content/IMDB Dataset.csv")
dataset.head()
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 50000 entries, 0 to 49999
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 review 50000 non-null object
1 sentiment 50000 non-null object
dtypes: object(2)
memory usage: 781.4+ KB
###Markdown
Data Preprocessing
###Code
dataset["review"] = dataset["review"].str.lower()
dataset["review"][0:]
bidirdata = [ "a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because",
"been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during",
"each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here",
"here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into",
"is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or",
"other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should",
"so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's",
"these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up",
"very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's",
"which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've",
"your", "yours", "yourself", "yourselves" ]
dataset["new reviews"] = dataset["review"].apply(lambda x : " ".join([word for word in x.split() if word not in bidirdata]))
def remove_tags(string):
string = re.sub("<.*?>" , "" , string)
return string
dataset["new reviews"] = dataset["new reviews"].apply(lambda x: remove_tags(x))
print(dataset["new reviews"][0])
print(dataset["review"][0])
dataset["new reviews"] = dataset["new reviews"].str.replace('[{}]'.format(string.punctuation), ' ')
dataset.drop('review', inplace=True, axis=1)
dataset.head()
def numurize(sent):
if sent == "positive":
return 1
else:
return 0
dataset["sentiment"] = dataset["sentiment"].apply(lambda x : numurize(x))
dataset.info()
y = np.array(dataset["sentiment"])
x = list(dataset["new reviews"])
from sklearn.model_selection import train_test_split
x_train , x_test , y_train , y_test = train_test_split(x , y , test_size = 0.2)
print("Len of reviews in train =" , len(x_train))
print("Len of reviews in test =" , len(x_test))
from keras.preprocessing.text import Tokenizer
tokinezer = Tokenizer(num_words = 5000)
tokinezer.fit_on_texts(x_train)
words_to_index = tokinezer.word_index
len(words_to_index)
!kaggle datasets download -d watts2/glove6b50dtxt
!unzip /content/glove6b50dtxt.zip
def read_glove_vector(glove_vec):
with open(glove_vec, 'r', encoding='UTF-8') as f:
words = set()
word_to_vec_map = {}
for line in f:
w_line = line.split()
curr_word = w_line[0]
word_to_vec_map[curr_word] = np.array(w_line[1:], dtype=np.float64)
return word_to_vec_map
word_to_vec_map = read_glove_vector('/content/glove.6B.50d.txt')
vocuablary_len = len(words_to_index)
embeded_len = word_to_vec_map["love"].shape[0]
max_len = 150
print("Number of Words =", vocuablary_len)
print("Number of Reviews =" , len(x_train))
print("Length of Embeded Vector =" , embeded_len )
print("Max Length of Words In a Review =" , max_len)
###Output
Number of Words = 95660
Number of Reviews = 40000
Length of Embeded Vector = 50
Max Length of Words In a Review = 150
###Markdown
Build Model
###Code
from keras.models import Model
from keras.layers import Dense , Embedding , LSTM , Input , Dropout
#initialize embeded matrix
embed_matrix = np.zeros((vocuablary_len , embeded_len))
for word , index in words_to_index.items():
embed_vect = word_to_vec_map.get(word)
if embed_vect is not None:
embed_matrix[index -1 , :] = embed_vect
embed_matrix.shape
inp = Input((max_len,))
emb = Embedding(vocuablary_len , output_dim = embeded_len , input_length = max_len ,
weights = [embed_matrix] , trainable = False)(inp)
output = LSTM(128 , return_sequences = True)(emb)
output = Dropout(0.6)(output)
output = LSTM(128 , return_sequences = True)(output)
output = Dropout(0.6)(output)
output = LSTM(128)(output)
output = Dense(1 , activation = "sigmoid")(output)
model = Model(inputs = inp , outputs = output )
model.summary()
from keras.preprocessing.sequence import pad_sequences
X_train_indices = tokinezer.texts_to_sequences(x_train)
X_train_indices = pad_sequences(X_train_indices, maxlen= max_len, padding='post')
X_train_indices.shape
model.compile(optimizer = "adam" , loss = "binary_crossentropy" , metrics = ["accuracy"])
model.fit(X_train_indices , y_train , batch_size = 64 , epochs = 20)
X_test_indices = tokinezer.texts_to_sequences(x_test)
X_test_indices = pad_sequences(X_test_indices, maxlen=max_len, padding='post')
model.evaluate(X_test_indices , y_test)
preds = model.predict(X_test_indices)
n = np.random.randint(0,9999)
print(x_test[n])
if preds[n] > 0.5:
print('predicted sentiment : positive')
else:
    print('predicted sentiment : negative')
if (y_test[n] == 1):
print('correct sentiment : positive')
else:
print('correct sentiment : negative')
###Output
put dvd player hit play will experience brief moment silence see black screen laser guided correct starting point center disc cherish moment make sure tylenol something preferably pm s can fall asleep going massive headache movie starts starring bunch big breasted girls opening actually made chuckle bit thought good time sure opening sequence wee bit awkward jokes fell flat seemed like going scream ripoff by way sole chuckle julie strain s final comment scene knew trouble opening sequence terrible rock song terrible rock song looked dvd chapter titles saw things said topless backyard better sex knew selling point movie going be and sad truth good thing movie attractive cast that sadly routine slasher film throws innovative concept murder clubs ends fake anyway so whole movie points another direction try confusing huge mystery just adds not interesting leaves feeling like don t care characters mean main character movie revealed murdered innocent woman can really feel sympathy towards fear life scream influence prevalent throughout ghost face killer really terrible jokes also treated scenes main character talking mom dad lloyd kaufman cool part movie abortion something uh yeah isn t so bad it s good movie just bad someone compared troma film but know film comes full moon or offshoot film proves horrible not horrible troma sense seen many troma films can honestly say offer something anything can walk away tell friends later however film pretty much nothing enjoyable it beware
predicted sentiment : negative
correct sentiment : negative
|
chapter_03_linear-networks/0_linear-regression.ipynb
|
###Markdown
Linear Regression

We vectorize the computation so that we can rely on linear algebra libraries instead of writing expensive for loops in Python.
###Code
%matplotlib inline
import math
import time
import numpy as np
import mindspore
import mindspore.numpy as mnp
import sys
sys.path.append("..")
import d2l.mindspore as d2l
###Output
_____no_output_____
###Markdown
Two ways of adding vectors
###Code
n = 10000
a = mnp.ones(n)
b = mnp.ones(n)
###Output
_____no_output_____
###Markdown
Let's define a timer
###Code
class Timer:
    """Record multiple running times."""
    def __init__(self):
        self.times = []
        self.start()
    def start(self):
        """Start the timer."""
        self.tik = time.time()
    def stop(self):
        """Stop the timer and record the time in a list."""
        self.times.append(time.time() - self.tik)
        return self.times[-1]
    def avg(self):
        """Return the average time."""
        return sum(self.times) / len(self.times)
    def sum(self):
        """Return the sum of the times."""
        return sum(self.times)
    def cumsum(self):
        """Return the cumulative times."""
        return np.array(self.times).cumsum().tolist()
###Output
_____no_output_____
###Markdown
First, we use a for loop to perform the addition one element at a time
###Code
c = mnp.zeros(n)
timer = Timer()
for i in range(n):
c[i] = a[i] + b[i]
f'{timer.stop():.5f} sec'
###Output
[WARNING] KERNEL(2938032,7ff0bd39c740,python):2021-10-29-08:16:43.100.101 [mindspore/ccsrc/backend/kernel_compiler/gpu/gpu_kernel_factory.cc:96] ReducePrecision] Kernel [TensorScatterUpdate] does not support int64, cast input 1 to int32.
[WARNING] PRE_ACT(2938032,7ff0bd39c740,python):2021-10-29-08:16:43.100.165 [mindspore/ccsrc/backend/optimizer/gpu/reduce_precision_fusion.cc:83] Run] Reduce precision for [TensorScatterUpdate] input 1
[WARNING] KERNEL(2938032,7ff0bd39c740,python):2021-10-29-08:16:43.106.712 [mindspore/ccsrc/backend/kernel_compiler/gpu/gpu_kernel_factory.cc:96] ReducePrecision] Kernel [TensorScatterUpdate] does not support int64, cast input 1 to int32.
[WARNING] PRE_ACT(2938032,7ff0bd39c740,python):2021-10-29-08:16:43.106.856 [mindspore/ccsrc/backend/optimizer/gpu/reduce_precision_fusion.cc:83] Run] Reduce precision for [TensorScatterUpdate] input 1
###Markdown
Alternatively, we use the overloaded `+` operator to compute the elementwise sum
###Code
timer.start()
d = a + b
f'{timer.stop():.5f} sec'
###Output
_____no_output_____
###Markdown
We define a Python function to compute the normal distribution
###Code
def normal(x, mu, sigma):
p = 1 / math.sqrt(2 * math.pi * sigma**2)
return p * np.exp(-0.5 / sigma**2 * (x - mu)**2)
###Output
_____no_output_____
###Markdown
Visualize the normal distributions
###Code
x = np.arange(-7, 7, 0.01)
params = [(0, 1), (0, 2), (3, 1)]
d2l.plot(x, [normal(x, mu, sigma) for mu, sigma in params], xlabel='x',
ylabel='p(x)', figsize=(4.5, 2.5),
legend=[f'mean {mu}, std {sigma}' for mu, sigma in params])
###Output
_____no_output_____
|
jupyter notebooks/What is the mechanism for imatinib's effect on asthma.ipynb
|
###Markdown
**Hint** module: find the corresponding bio-entity representation used in BioThings Explorer based on user input (which could be any database ID, symbol, or name).

**FindConnection** module: find intermediate bio-entities which connect a user-specified input and output.

Step 1: Find representations of "asthma" and "imatinib" in BTE
###Code
# Assumed imports from the BioThings Explorer package (the original notebook
# presumably imported these in an earlier cell):
from biothings_explorer.hint import Hint
from biothings_explorer.user_query_dispatcher import FindConnection

ht = Hint()
# find all potential representations of asthma
asthma_hint = ht.query("asthma")
# select the correct representation of asthma
asthma = asthma_hint['DiseaseOrPhenotypicFeature'][0]
asthma
# find all potential representations of imatinib
imatinib_hint = ht.query("imatinib")
# select the correct representation of imatinib
imatinib = imatinib_hint['ChemicalSubstance'][0]
imatinib
###Output
_____no_output_____
###Markdown
Step 2: Find intermediate nodes connecting imatinib and asthma
###Code
help(FindConnection.__init__)
fc = FindConnection(input_obj=asthma, output_obj=imatinib, intermediate_nodes=['Gene'])
# set verbose=True will display all steps which BTE takes to find the connection
fc.connect(verbose=True)
###Output
==========
========== QUERY PARAMETER SUMMARY ==========
==========
BTE will find paths that join 'asthma' and 'IMATINIB'. Paths will have 1 intermediate node.
Intermediate node #1 will have these type constraints: Gene
==========
========== QUERY #1 -- fetch all Gene entities linked to 'asthma' ==========
==========
==== Step #1: Query path planning ====
Because asthma is of type 'DiseaseOrPhenotypicFeature', BTE will query our meta-KG for APIs that can take 'DiseaseOrPhenotypicFeature' as input
BTE found 3 apis:
API 1. mydisease.info(1 API call)
API 2. semmeddisease(1 API call)
API 3. biolink_disease2gene(1 API call)
==== Step #2: Query path execution ====
NOTE: API requests are dispatched in parallel, so the list of APIs below is ordered by query time.
API 3.1: http://mydisease.info/v1/query (POST "q=C0004096&scopes=mondo.xrefs.umls,disgenet.xrefs.umls&fields=disgenet.genes_related_to_disease&species=human&size=100")
API 2.1: http://pending.biothings.io/semmed/query (POST "q=C0004096&scopes=umls&fields=AFFECTS_reverse.protein,AFFECTS_reverse.gene,ASSOCIATED_WITH.gene,AFFECTS.gene,CAUSES_reverse.gene,AFFECTS.protein&species=human&size=100")
API 1.1: https://api.monarchinitiative.org/api/bioentity/disease/MONDO:0004979/genes?rows=100
==== Step #3: Output normalization ====
API 1.1 biolink_disease2gene: 100 hits
API 3.1 mydisease.info: 99 hits
API 2.1 semmeddisease: 550 hits
After id-to-object translation, BTE retrieved 567 unique objects.
==========
========== QUERY #2 -- fetch all Gene entities linked to 'IMATINIB' ==========
==========
==== Step #1: Query path planning ====
Because IMATINIB is of type 'ChemicalSubstance', BTE will query our meta-KG for APIs that can take 'ChemicalSubstance' as input
BTE found 3 apis:
API 1. dgidb_chemical2gene(1 API call)
API 2. mychem.info(2 API calls)
API 3. semmedgene(6 API calls)
==== Step #2: Query path execution ====
NOTE: API requests are dispatched in parallel, so the list of APIs below is ordered by query time.
API 2.2: http://mychem.info/v1/query (POST "q=DB00619&scopes=drugbank.id&fields=drugbank.targets,drugbank.enzymes&species=human&size=100")
API 2.1: http://mychem.info/v1/query (POST "q=CHEMBL941&scopes=chembl.molecule_chembl_id&fields=drugcentral.bioactivity&species=human&size=100")
API 3.4: https://pending.biothings.io/semmedgene/query (POST "q=C0935989&scopes=associated_with_reverse.chemical_substance.umls&fields=umls&species=human&size=100")
API 3.2: https://pending.biothings.io/semmedgene/query (POST "q=C0935989&scopes=affects_reverse.chemical_substance.umls&fields=umls&species=human&size=100")
API 1.1: http://www.dgidb.org/api/v2/interactions.json?drugs=CHEMBL941
API 3.6: https://pending.biothings.io/semmedgene/query (POST "q=C0935989&scopes=interacts_with_reverse.chemical_substance.umls&fields=umls&species=human&size=100")
API 3.3: https://pending.biothings.io/semmedgene/query (POST "q=C0935989&scopes=associated_with.chemical_substance.umls&fields=umls&species=human&size=100")
API 3.1: https://pending.biothings.io/semmedgene/query (POST "q=C0935989&scopes=affects.chemical_substance.umls&fields=umls&species=human&size=100")
API 3.5: https://pending.biothings.io/semmedgene/query (POST "q=C0935989&scopes=interacts_with.chemical_substance.umls&fields=umls&species=human&size=100")
==== Step #3: Output normalization ====
API 1.1 dgidb_chemical2gene: 34 hits
API 2.1 mychem.info: 77 hits
API 2.2 mychem.info: 18 hits
API 3.1 semmedgene: No hits
API 3.2 semmedgene: No hits
API 3.3 semmedgene: No hits
API 3.4 semmedgene: No hits
API 3.5 semmedgene: 100 hits
API 3.6 semmedgene: 100 hits
After id-to-object translation, BTE retrieved 245 unique objects.
==========
========== Final assembly of results ==========
==========
BTE found 34 unique intermediate nodes connecting 'asthma' and 'IMATINIB'
###Markdown
Step 3: Display and Filter results
###Code
df = fc.display_table_view()
df
###Output
_____no_output_____
###Markdown
Filter by the predicate of the first query
###Code
df[df['pred1'] == 'causedBy']
###Output
_____no_output_____
###Markdown
Filter by the predicate of the second query
###Code
df[df['pred2'] == 'target']
###Output
_____no_output_____
###Markdown
Check the detailed infomation regarding a specific edge
###Code
fc.display_edge_info('LCK', 'IMATINIB')
###Output
_____no_output_____
|
ch04/Training Models.ipynb
|
###Markdown
Training Models

This notebook is dedicated to chapter 4 of the book, exploring how models can be trained.

Linear Regression

Model Definition

We can define the linear model as follows:

$$\hat{y}=\theta_{0} + \theta_{1}x_{1}+\theta_{2}x_{2}+\dots+\theta_{n}x_{n}$$

Where:

* $\hat{y}$ is the predicted value
* $n$ is the number of features
* $x_{i}$ is the $i^{th}$ feature value (i.e., the instance's attribute values)
* $\theta_{j}$ is the $j^{th}$ model parameter, including the bias term $\theta_0$ and the feature weights $\theta_1,\theta_2,\dots,\theta_n$

And in vectorized form:

$$\hat{y}=h_{\theta}(X)=\theta^T \cdot X$$

Where:

* $\hat{y}$ is again the predicted value
* $\theta$ is the model's *parameter vector*, containing the bias term $\theta_0$ and the feature weights $\theta_1$ to $\theta_n$
* $\theta^T$ is the transpose of $\theta$, a row vector instead of a column vector
* $X$ is the instance's *feature vector*, containing $x_0$ to $x_n$ **with $x_0$ always equal to $1$**
* $\theta^T \cdot X$ is the dot product of $\theta^T$ and $X$
* $h_{\theta}$ is the hypothesis function, using the model parameters $\theta$

Metrics

A linear model consists in fitting the equation of a line (the model) to a series of data points that resemble a line. We are going to play with the model parameters, i.e., $\theta_{n}$, until we find the line that best fits the data. To do this, we need a measure that tells us how good the fit is. We can think of this measurement as the distance between each data point and the line we are fitting: the smaller the difference, the better. This metric is called the *Mean Squared Error* or *MSE*:

$$MSE(X,h_{\theta})=\frac{1}{m}\sum_{i=1}^{m} (\theta^{T} \cdot x^{(i)} - y^{(i)})^2$$

This is basically adding up all the differences between the point on the line (the predicted value) and the actual value, squaring them (to deal with negative values), and finally averaging over the number of samples.

This is therefore a minimization problem: we want to find the values of $\theta_{n}$ that minimize the value of $MSE(\theta)$.

The normal equation

The closed-form solution to this problem is:

$$\hat{\theta}=(X^T \cdot X)^{-1} \cdot X^T \cdot y$$

Where:

* $\hat{\theta}$ is the value of $\theta$ that minimizes the cost function.
* $y$ is the vector of target values containing $y^{(1)}$ to $y^{(m)}$

Visualization
###Code
import numpy as np
np.random.seed(12345)
# Generate 100 points, rand receives the dimensions of the returning vector,
# in this case, 100 rows and 1 column
X = 2 * np.random.rand(100, 1)
# This is basically applying the model 'y = 3x + 4' however, we are adding noise with a random normal distribution to simulate some dispersion in the data points
y = 4 + 3 * X + np.random.randn(100, 1)
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(8, 6))
plt.scatter(X, y)
plt.show()
###Output
_____no_output_____
###Markdown
Now, let's compute $\hat{\theta}$ using the normal function and with help of numpy's linear algebra packages to calculate the inverse of a matrix (-1 exponent) and the dot product.
###Code
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T.dot(y))
theta_best
###Output
_____no_output_____
###Markdown
Notice the actual values of the function we used to generate the data (+ Gaussian Noise) and look at what we obtained here, they resemble pretty well the original values, considering the noise off course.Now, let's make predictions using these thetas as model parameters and plot the line and appreciate it graphically
###Code
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 =1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
plt.figure(figsize=(8, 6))
plt.plot(X_new, y_predict, 'r-')
plt.scatter(X, y)
plt.axis([0, 2, 0, 15]) # X axis from 0 to 2, Y axis from 0 to 15
plt.legend(["Predictions"])
plt.show()
###Output
_____no_output_____
###Markdown
Equivalent code using scikit learn
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
###Output
_____no_output_____
###Markdown
Notes:

This method works well for a large number of samples, $O(m)$, but starts to perform poorly when the number of features grows: from $O(n^{2.4})$ to $O(n^3)$.

Gradient Descent

This technique consists in taking steps in the direction where the value of the cost function becomes smaller. It's like going downhill: you want to go in the steepest downward direction to get to the bottom.

The MSE cost function is like a bowl with a global minimum, so we can pick any point on the function and move in the direction where the value becomes smaller. The size of each step is called the *learning rate*; depending on its value, we can reach the global minimum if we wait long enough with small step sizes.

Important: if we have features with different scales, the bowl can look elongated, which means that reaching the minimum might take a long time. This is one of the reasons why it's important to work with data on the same scale.

Batch Gradient Descent

*To implement Gradient Descent, you need to compute the gradient of the cost function with regards to each model parameter $\theta_j$. In other words, you need to calculate how much the cost function will change if you change $\theta_j$ just a bit. **This is called a partial derivative*** (directly from the book).

The partial derivative of MSE with respect to $\theta_j$ is:

$$\frac{\partial}{\partial\theta_j} MSE(\theta)=\frac{2}{m}\sum_{i=1}^{m} (\theta^{T} \cdot x^{(i)} - y^{(i)} ) x_j^{(i)}$$

Now we can compute all the partial derivatives, one for each model parameter, and this gives us the gradient vector $\bigtriangledown_{\theta}MSE(\theta)$:

$$\bigtriangledown_{\theta}MSE(\theta) = \begin{bmatrix}\frac{\partial}{\partial\theta_0} MSE(\theta) \\\frac{\partial}{\partial\theta_1} MSE(\theta) \\\vdots \\\frac{\partial}{\partial\theta_n} MSE(\theta)\end{bmatrix} = \frac{2}{m}X^T \cdot (X \cdot \theta - y)$$

Notice this means we will be calculating the gradient over the whole data set $X$, which is why this method is very slow for large datasets. However, it scales well for a great number of features.

What the gradient vector tells us is the direction where the function moves uphill. Since we want to go downhill, we just need to go in the opposite direction. To do this, we subtract $\bigtriangledown_{\theta}MSE(\theta)$ from $\theta$. This is where the learning rate $\eta$ is used: we multiply the learning rate by the gradient vector to get the size of the next downhill step.

$$\theta^{(next step)}=\theta - \eta \bigtriangledown_{\theta} MSE(\theta)$$

Now, let's look at a quick implementation of this algorithm:
###Code
def linear_regression_gd(X, y, eta=0.1, n_iterations=1000):
"""Trains a linear regression model with gradient descent.
Keyword arguments:
X -- features
y -- target values
eta -- the learning rate
n_iterations -- the number of iterations
"""
theta = np.random.rand(2,1) # random initialization, two parameters, theta0 and theta1
m = X.shape[0] # number of samples
for iteration in range(n_iterations):
# applying the partial derivative of MSE, notice X_b.dot(theta) is equal to the predicted value xi
gradients = 2/m * X.T.dot(X.dot(theta) - y)
# here we update theta, the parameters, in the oposite direction of the gradient by the learning rate amount
theta = theta - eta * gradients
return theta
eta = 0.1 # learning rate
n_iterations = 1000
linear_regression_gd(X_b, y, eta, n_iterations)
###Output
_____no_output_____
###Markdown
We obtained the same model parameters as with the normal equation!

Gradient descent depends on the hyperparameters *learning rate* and *number of iterations*. The model parameters are sensitive to these values; let's see what happens if we use different values.
###Code
def linear_regression_gd_steps(X, y, eta, n_iterations=10):
result = []
for i in range(n_iterations):
result.append(linear_regression_gd(X_b, y, eta, i + 1))
return result
etas = {
0.02: linear_regression_gd_steps(X_b, y, 0.02),
0.1: linear_regression_gd_steps(X_b, y, 0.1),
0.5: linear_regression_gd_steps(X_b, y, 0.5)
}
%matplotlib inline
def plot_iteration_evolution(X_b, y, eta):
interval = int(len(plt.cm.magma.colors)/len(eta[1]))
colors = plt.cm.magma.colors[::interval]
color_iter = iter(colors)
plt.scatter(X, y)
plt.title("$\eta={}$".format(eta[0]), fontsize=15)
plt.xlabel("$x_1$", fontsize=15)
plt.ylabel("$y$", rotation=0, fontsize=15)
plt.axis([0, 2, 0, 15])
for theta in eta[1]:
predictions = X_b.dot(theta)
color = next(color_iter)
plt.plot(X_b, predictions, color=color)
plt.figure(figsize=(20, 6))
for index, eta in enumerate(etas.items()):
plt.subplot(1, 3, index + 1)
plot_iteration_evolution(X_b, y, eta)
plt.show()
###Output
_____no_output_____
###Markdown
We can observe here the evolution for three different learning rates; the lighter the line, the higher the number of iterations (the darkest line is the first iteration).

* Notice that with $0.02$, within 10 iterations it wasn't able to converge, but it is consistently approximating the optimal solution.
* Notice that with $0.1$, within 10 iterations it converges really quickly.
* Notice that with $0.5$, within just 2 iterations it not only fails to converge, it also fires completely outside the solution. This is an example of a too-large learning rate making big jumps in the bowl and never reaching the bottom.

Stochastic Gradient Descent

The main problem with Batch Gradient Descent is that it needs to use the whole data set on each iteration to compute the gradient, hence for large data sets it behaves poorly or takes long to converge, assuming the dataset fits in memory at all.

SGD follows the opposite approach: on each iteration it randomly (hence stochastic) takes a single sample, calculates the gradient and updates the parameters. On one side, it deals pretty well with large datasets as it only needs to have a single instance in memory to calculate the gradient; however, contrary to BGD, it approximates the solution abruptly. You will notice each iteration jumps back and forth instead of a nice step-by-step evolution.

Another thing is that, given its jumpy nature, it may actually be able to find the global minimum instead of just settling in a local minimum. However, still because of the same reason, it won't be able to reach the true global minimum, only approximate it. One way of dealing with this issue is to modify the *learning rate* on a schedule, so it starts with big steps to avoid local minima, but reduces the value as it starts to close in on the global minimum.
###Code
n_epochs = 50
t0, t1 = 5, 50
m = len(X_b)
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.rand(2, 1)
steps = []
eta_hist = []
for epoch in range(n_epochs):
for i in range(m):
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
eta_hist.append(eta)
theta = theta - eta * gradients
steps.append(theta)
theta
###Output
_____no_output_____
###Markdown
Notice here the first and last 10 etas used. We can observe they gradually decrease over time.
###Code
print("First 10 eta: ", eta_hist[:10])
print("Last 10 eta: ", eta_hist[-10:])
###Output
First 10 eta: [0.1, 0.09803921568627451, 0.09615384615384616, 0.09433962264150944, 0.09259259259259259, 0.09090909090909091, 0.08928571428571429, 0.08771929824561403, 0.08620689655172414, 0.0847457627118644]
Last 10 eta: [0.000992063492063492, 0.0009918666931164452, 0.0009916699722332407, 0.00099147332936744, 0.0009912767644726407, 0.0009910802775024777, 0.0009908838684106222, 0.0009906875371507827, 0.0009904912836767037, 0.0009902951079421667]
###Markdown
We got close to the solution given by batch gradient descent, and we only needed 50 passes over the dataset instead of 1000. So this method, despite not reaching the actual global minimum, tends to converge faster.

Now let's observe the evolution of the steps, plotting every 500th one.
###Code
plt.figure(figsize=(10, 8))
plt.scatter(X, y)
plt.title("Stochastic Gradient Descent")
plt.xlabel("$x_1$", fontsize=15)
plt.ylabel("$y$", rotation=0, fontsize=15)
plt.axis([0, 2, 0, 15])
for step in steps[::500]:
predictions = X_b.dot(step)
plt.plot(X_b, predictions)
plt.show()
###Output
_____no_output_____
###Markdown
We can observe that it took only a few steps to approach a converging point.

We can train this with Scikit-Learn using ```SGDRegressor```
###Code
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(eta0=0.1, max_iter=50)
sgd_reg.fit(X, y.ravel()) # Use ravel to convert the column vector y (100,1) to a 1-D vector (100)
print("Coef: {}, Intercept: {}".format(sgd_reg.coef_, sgd_reg.intercept_))
###Output
Coef: [3.10819672], Intercept: [4.01600025]
###Markdown
Here we can observe we reached convergence. Let's plot it.
###Code
predictions = sgd_reg.predict(X)
plt.figure(figsize=(10, 6))
plt.scatter(X, y)
plt.title("Stochastic Gradient Descent - Scikit Learn")
plt.xlabel("$x_1$", fontsize=15)
plt.ylabel("$y$", rotation=0, fontsize=15)
plt.plot(X, predictions, 'r-')
plt.show()
###Output
_____no_output_____
###Markdown
Mini Batch Gradient Descent

This approach is a combination of the two previous ones, taking advantage of each. Instead of iterating over the whole training set (as in Batch GD) or instance by instance (as in SGD), the idea is to iterate over small random subsets of the training set (called mini-batches) each epoch. This boosts the learning speed and takes advantage of the computing power available for matrix computations. However, it is more prone to getting stuck in local minima than SGD.
###Code
from sklearn.base import BaseEstimator, RegressorMixin
class MBGDRegressor(BaseEstimator, RegressorMixin):
def __init__(self, n_epochs, learning_rate='geometric', eta0=0.1, ls_rate=None, mini_batch_size=None):
self.n_epochs = n_epochs
self.eta0 = eta0
self.mini_batch_size = mini_batch_size
self.learning_rate = learning_rate
self.thetas = []
self.intercet_ = 0.
self.coef_ = 0.
self.best_parameters = 0.
if ls_rate:
self.ls_rate = ls_rate
else:
self.ls_rate = eta0 / mini_batch_size
def __lrs(self):
n = 0
while(True):
if self.learning_rate == 'geometric':
yield (self.eta0 * pow(1 / (1 + self.ls_rate), n))
n += 1
elif self.learning_rate == 'constant':
yield self.eta0
else:
raise ValueError('the learning rate mode is not valid. Valid ones: [geometric, constant]')
def fit(self, X, y):
theta = np.random.rand(X.shape[1] + 1, 1)
X = np.c_[np.ones((X.shape[0], 1)), X]
# Normal iteration over all the epochs
for epoch in range(self.n_epochs):
# Take a random 'mini_batch_size'
mb_index = np.random.randint(0, max(X.shape[0] - 1, 1), self.mini_batch_size)
X_mb = X[mb_index]
y_mb = y[mb_index]
gradients = 2/self.mini_batch_size * X_mb.T.dot(X_mb.dot(theta) - y_mb)
lrs = next(self.__lrs())
theta = theta - lrs * gradients
self.thetas.append(theta)
self.intercept_ = self.thetas[-1][0]
self.coef_ = self.thetas[-1][1:].T
self.best_parameters = self.thetas[-1]
def predict(self, X):
X = np.c_[np.ones((X.shape[0], 1)), X]
return X.dot(self.best_parameters)
epochs = 1000
theta = np.random.rand(2, 1)
lr = 0.1
for epoch in range(0, epochs):
gradients = 2/X_b.shape[0] * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - lr * gradients
theta
np.random.seed(12345)
mbgd_reg = MBGDRegressor(n_epochs=10, learning_rate='geometric', eta0=0.1, mini_batch_size=20)
mbgd_reg.fit(X, y)
mbgd_reg.intercept_,mbgd_reg.coef_
len(mbgd_reg.thetas)
predictions = mbgd_reg.predict(X)
plt.figure(figsize=(10, 6))
plt.scatter(X, y)
plt.title("Mini Batch Gradient Descent")
plt.xlabel("$x_1$", fontsize=15)
plt.ylabel("$y$", rotation=0, fontsize=15)
plt.plot(X, predictions, 'r-')
for theta in mbgd_reg.thetas:
predictions = X_b.dot(step)
plt.plot(X_b, predictions)
plt.show()
###Output
_____no_output_____
###Markdown
Notice that with only 10 epochs and with the same learning rate, with a mini batch size of 20 we approach to convergence really close.
###Code
mbgd_steps = np.array(mbgd_reg.thetas).reshape(10, 2)
sgd_steps = np.array(steps).reshape(5000, 2)
bgd_steps = np.array(linear_regression_gd_steps(X_b, y, 0.1, 1000)).reshape(1000, 2)
plt.figure(figsize=(10, 6))
plt.plot(bgd_steps[:,0], bgd_steps[:,1], 'g-o', label='Batch GD')
plt.plot(mbgd_steps[:,0], mbgd_steps[:,1], 'r-s', label='Mini Batch GD')
plt.plot(sgd_steps[:,0], sgd_steps[:,1], 'b-*', label='Stochastic GD')
plt.legend(loc='upper left', fontsize=14)
plt.xlabel(r'$\theta_0$')
plt.ylabel(r'$\theta_1$')
plt.show()
###Output
_____no_output_____
###Markdown
I'm starting from different values from the book, for both data points and the initial theta values, hence it's normal that the above plot differs from the original. I'm aware this would need to be fixed. However, there is something similar in the patterns of each step. Notice the erratic behavior of the stochastic gradient descent, the short and more concise steps of the mini batches and the smooth ending of the batch gradient descent. Though on the later, we can observe an erratic path at the beginning. Polynomial regressionNot all the data will behave and fit in a straight line. For more complex cases we can fit curve functions to data points by elevating the parameters to some power then fit the linear model.
###Code
np.random.seed(12345)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.rand(m, 1)
plt.figure(figsize=(10, 6))
plt.scatter(X, y)
plt.xlabel("$X_1$")
plt.ylabel("$y$")
plt.show()
###Output
_____no_output_____
###Markdown
Notice we deliberately generated quadratic-shaped data with some noise. By looking at the plot, we can imagine this is a parabola, i.e. a polynomial of 2nd degree, so now we are going to infer these properties for the model.
###Code
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
print("Original feature: ", X[0])
print("Original feature + the square of it: ", X_poly[0])
###Output
Original feature: [2.57769656]
Original feature + the square of it: [2.57769656 6.64451954]
###Markdown
Now if we use the squared values to fit a linear regression, we will be able to fit a model
###Code
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_,lin_reg.coef_
lin_reg.coef_.shape
def plot_polynomial_regression(X, y, model):
min_x = round(X.min())
max_x = round(X.max())
X_new=np.linspace(min_x, max_x, X.shape[0]).reshape(X.shape[0], 1)
X_new_poly = poly_features.transform(X_new)
y_new = model.predict(X_new_poly)
intercept = model.intercept_[0]
coef = model.coef_[0]
plt.figure(figsize=(10, 6))
plt.scatter(X, y)
plt.xlabel("$X_1$")
plt.ylabel("$y$")
plt.plot(X_new, y_new, "r-", linewidth=2, label=r"$y={:.2f}x^2+{:.2f}x+{:.2f}$".format(coef[1], coef[0], intercept))
plt.legend(loc='upper left', fontsize=14)
plt.show()
plot_polynomial_regression(X, y, lin_reg)
###Output
_____no_output_____
###Markdown
This looks good: notice the original function was $0.5x^2 + x + 2 + Gaussian Noise$, and from this training we obtained $0.49x^2 + 0.97x + 2.5$, which is pretty close.

Here it is necessary to generate another linspace within the same range as we see in the scatter plot in order to draw a smooth best-fit line. We could use the original set, but since it contains noise, we would end up with a plot where the line is difficult to read (see the plot below). So basically we are going to generate 100 points evenly distributed within the range of the data we observe; this makes the plot follow a single path, forming the line we are expecting to see.
###Code
n = len(X)
plt.plot(X[:n], lin_reg.predict(X_poly[:n]))
plt.show()
###Output
_____no_output_____
###Markdown
Let's see if we can get the same result using my own implemented linear regression model.
###Code
mbgd_reg = MBGDRegressor(n_epochs=200, learning_rate='geometric', eta0=0.01, mini_batch_size=30)
mbgd_reg.fit(X_poly, y)
mbgd_reg.intercept_,mbgd_reg.coef_
plot_polynomial_regression(X, y, mbgd_reg)
###Output
_____no_output_____
###Markdown
Look! Pretty close to what Scikit-Learn obtained. Scikit-Learn: $0.49x^2 + 0.97x + 2.5$. Mine: $0.56x^2+0.98x+2.17$. Not bad! What if we trained with a high-degree polynomial?
###Code
degree = 40
poly_features_hdegree = PolynomialFeatures(degree=degree)
poly_features = PolynomialFeatures(degree=2)
X_hdegree_poly = poly_features_hdegree.fit_transform(X)
X_2_poly = poly_features.fit_transform(X)
mbgd_hdegree_reg = MBGDRegressor(n_epochs=200, learning_rate='geometric', eta0=0.01, mini_batch_size=30)
mbgd_hdegree_reg.fit(X_hdegree_poly, y)
lr_hdegree_reg = LinearRegression()
lr_hdegree_reg.fit(X_hdegree_poly, y)
lr_2_reg = LinearRegression()
lr_2_reg.fit(X_2_poly, y)
lr_reg = LinearRegression()
lr_reg.fit(X, y)
plt.figure(figsize=(12, 8))
plt.scatter(X, y)
plt.xlabel("$X_1$")
plt.ylabel("$y$")
plt.axis([-3, 3, 0, 10])
min_x = round(X.min())
max_x = round(X.max())
X_new=np.linspace(min_x, max_x, X.shape[0]).reshape(X.shape[0], 1)
X_new_2poly = poly_features.transform(X_new)
X_new_hdegreepoly = poly_features_hdegree.transform(X_new)
y_linear = lr_reg.predict(X_new)
y_2degree = lr_2_reg.predict(X_new_2poly)
y_hdegree = lr_hdegree_reg.predict(X_new_hdegreepoly)
plt.plot(X_new, y_linear, 'r-+', label='Linear')
plt.plot(X_new, y_2degree, 'b--', label='2 Degree')
plt.plot(X_new, y_hdegree, 'g-', label='{} Degree'.format(degree))
plt.legend(loc='upper left', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Notice that with higher-degree polynomials, despite the curve trying to reach every point in some parts (which signals overfitting), the rest of the function does not fit well. On the opposite side, the linear model is underfitting the data.

Learning curves

One way to measure how simple or complex a model is, and to evaluate whether it is overfitting or underfitting the data, is to look at the learning curves. This is done by training the model several times on subsets of the original training set and comparing the results against a held-out validation set.
###Code
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y, axis):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
train_errors, val_errors = [],[]
for m in range(1, len(X_train)):
# train the model in increasing subsets of the training set
model.fit(X_train[:m], y_train[:m])
# predict in the same subset
y_train_predict = np.nan_to_num(model.predict(X_train[:m]))
# and finally predict in all the validation set
y_val_predict = np.nan_to_num(model.predict(X_val))
train_errors.append(mean_squared_error(y_train_predict, y_train[:m]))
val_errors.append(mean_squared_error(y_val_predict, y_val))
plt.figure(figsize=(12, 8))
plt.title('Learning curves - Training vs Validation')
plt.plot(np.sqrt(train_errors), 'r-+', linewidth=2, label='train')
plt.plot(np.sqrt(val_errors), 'b-', linewidth=2, label='val')
plt.legend(loc='upper right', fontsize=14)
plt.xlabel('Training set size')
plt.ylabel('RMSE')
if axis:
plt.axis(axis)
plt.show()
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y, [0, 80, 0, 5])
mbgd_hdegree_reg = MBGDRegressor(n_epochs=200, learning_rate='geometric', eta0=0.01, mini_batch_size=30)
plot_learning_curves(mbgd_hdegree_reg, X, y, [0, 80, 0, 3])
###Output
_____no_output_____
###Markdown
How to interpret it:
* When the model is trained on a single instance, it can predict it perfectly, hence the training error starts at zero.
* On the other hand, this same model performs poorly on the validation set, which contains many more points that naturally don't fit the fitted line, hence a high error.
* As we add more instances to the training set, the training error inevitably grows, since the data points don't actually fit a straight line perfectly.
* At the same time, the error on the validation set starts to decrease, because as new samples are presented during training the model is able to generalize better.
* At some point both curves reach a plateau around some error value. Depending on how high that value is, you may have ended up with an underfitting model.

**Note:** When underfitting, adding more instances won't help; in that case you should aim for a more complex model or better features. Here, since we are dealing with polynomial data, the best option is to train a polynomial model instead of a linear one.

Now, let's see what happens when we train a polynomial model.
###Code
plot_learning_curves(lin_reg, X_poly, y, [0, 80, 0, 3])
plot_learning_curves(mbgd_hdegree_reg, X_poly, y, [0, 80, 0, 3])
###Output
_____no_output_____
###Markdown
We have an improvement in the learning curve: the plateau now sits below 1.0. However (even if not very pronounced in this case), if we see a gap between the training and validation curves, in particular with the training curve below, it probably means the model is overfitting: it is learning the training data well but generalizing poorly, i.e., the validation predictions are not that good. The gap can be observed when we use an overly complex model, i.e., a higher-degree polynomial.
###Code
poly_10_features = PolynomialFeatures(degree=10)
X_10_poly = poly_10_features.fit_transform(X)
plot_learning_curves(lin_reg, X_10_poly, y, [0, 80, 0, 3])
###Output
_____no_output_____
###Markdown
One way to deal with overfitting is to add more training data.

Ridge regression
Another way is to constrain the model parameters, e.g., by limiting the degrees of freedom of a polynomial regression. This is called regularization. One option is **Ridge Regression**, which consists in adding a *regularization term* equal to $\alpha\sum_{i=1}^n \theta_i^2$ to the cost function. This forces the algorithm to not only fit the data but also keep the model weights as small as possible. The regularization term should only be added to the cost function during training; when evaluating the model, you want to use the unregularized performance measure. The hyperparameter $\alpha$ controls how much you want to regularize the model. If $\alpha = 0$, Ridge Regression is just Linear Regression. If $\alpha$ is very large, all weights end up very close to zero and the result is a flat line going through the data's mean.$$J(\theta)=MSE(\theta) + \alpha\frac{1}{2}\sum_{i=1}^n \theta_i^2$$Notice we do not regularize $\theta_0$ (the sum starts at $i=1$). If we define $\mathbf{w}$ as the vector of feature weights ($\theta_1$ to $\theta_n$), then the regularization term is simply equal to $\frac{1}{2}(||w||_2)^2$, where $||\cdot||_2$ represents the $l_2$ norm of the weight vector. For Gradient Descent, just add $\alpha w$ to the MSE gradient vector.

*Ridge Regression closed-form solution*:$$\hat{\theta}=(X^T \cdot X+\alpha A)^{-1} \cdot X^T \cdot y$$where $A$ is the $(n+1)\times(n+1)$ identity matrix, except for a $0$ in the top-left cell corresponding to the bias term.
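As a quick illustration, here is a minimal NumPy sketch of this closed-form solution. The toy data, the `alpha` value and the variable names are made up for this example only:

```python
import numpy as np

np.random.seed(42)
m = 50
X_toy = 2 * np.random.rand(m, 1)
y_toy = 4 + 3 * X_toy + np.random.randn(m, 1)

X_b = np.c_[np.ones((m, 1)), X_toy]   # add the bias column x0 = 1
alpha = 1.0
A = np.identity(X_b.shape[1])
A[0, 0] = 0                           # do not regularize the bias term theta_0

# theta = (X^T X + alpha * A)^-1 X^T y
theta_ridge = np.linalg.inv(X_b.T @ X_b + alpha * A) @ X_b.T @ y_toy
print(theta_ridge.ravel())            # regularized estimates of [theta_0, theta_1]
```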
###Code
np.random.seed(12345)
noise_size = 10
X_reg = 2 * np.random.rand(100, 1)
y_linear = 4 + 3*X_reg + (np.random.rand(100, 1) * noise_size)  # use X_reg so y matches the x values we plot against
y_non_linear = 1.5 * X_reg**2 + 3*X_reg + 2 + (np.random.rand(100, 1) * noise_size)
plt.figure(figsize=(15, 6))
plt.subplot(121)
plt.title('Linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_linear, label=r"$y=3x+4+GN$")
plt.legend()
plt.subplot(122)
plt.title('Non linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_non_linear, label=r"$y=1.5x^2+3x+2+GN$")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now, let's put the Ridge regression to test with these two datasets
###Code
from sklearn.linear_model import Ridge, Lasso, ElasticNet
def train_regularized_models(X, y, linear=True, degree=2, regularization='ridge', alphas=[]):
models = []
if not linear:
poly_transformer = PolynomialFeatures(degree=degree)
X = poly_transformer.fit_transform(X)
def get_model(reg, alpha):
if reg == 'ridge':
return Ridge(alpha=alpha, solver='cholesky')
elif reg == 'lasso':
return Lasso(alpha=alpha)
elif reg == 'enet':
return ElasticNet(alpha=alpha, l1_ratio=0.5)
for alpha in alphas:
model = get_model(regularization, alpha)
model.fit(X, y)
models.append(model)
return models
# here we define the alpha parameters we'll pass to Ridge Regression
linear_alphas = [0, 10, 100]
non_linear_alphas = [0, 1*10**-5, 1]
degree=40
poly_transformer = PolynomialFeatures(degree=degree)
X_reg_space = np.linspace(round(X_reg.min()), round(X_reg.max()), X_reg.shape[0]).reshape(X_reg.shape[0], 1)
X_reg_space_poly = poly_transformer.fit_transform(X_reg_space)
linear_models = train_regularized_models(X_reg, y_linear, alphas=linear_alphas)
linear_models_non_linear_data = train_regularized_models(X_reg, y_non_linear, alphas=linear_alphas)
non_linear_models = train_regularized_models(X_reg, y_non_linear, linear=False, degree=degree, alphas=non_linear_alphas)
plt.figure(figsize=(15, 6))
plt.subplot(131)
plt.title('Linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_linear, color='c')
for alpha, model in zip(linear_alphas, linear_models):
predictions = model.predict(X_reg)
plt.plot(X_reg, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.subplot(132)
plt.title('Linear model - Non linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_non_linear, color='c')
plt.axis([0, 2, 0, 25])
for alpha, model in zip(linear_alphas, linear_models_non_linear_data):  # these models were trained with linear_alphas
predictions = model.predict(X_reg)
plt.plot(X_reg, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.subplot(133)
plt.title('Non linear model - Non linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_non_linear, color='c')
plt.axis([0, 2.1, 0, 25])
for alpha, model in zip(non_linear_alphas, non_linear_models):
predictions = model.predict(X_reg_space_poly)
plt.plot(X_reg_space, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We can observe that, when using regularization, higher values of $\alpha$ make the model less prone to overfitting. When $\alpha=0$, i.e., no regularization, the model tries to adjust to individual points (overfitting), while with $\alpha=1$ the function is smoother, because the regularization constrains the model parameters.

Lasso Regression
*Least Absolute Shrinkage and Selection Operator Regression*, simply called *Lasso Regression*. Just as in Ridge Regression, it consists in adding another term to the cost function, but it uses the $l_1$ norm of the weight vector instead of half the square of the $l_2$ norm:$$J(\theta)=MSE(\theta) + \alpha \sum_{i=1}^n |\theta_i|$$Here we observe the same data as before, but using Lasso models instead of Ridge models.
###Code
# here we define the alpha parameters we'll pass to Lasso Regression
linear_alphas = [0.01, 0.1, 1]
non_linear_alphas = [0.1, 1*10**-7, 1]
degree=40
poly_transformer = PolynomialFeatures(degree=degree)
X_reg_space = np.linspace(round(X_reg.min()), round(X_reg.max()), X_reg.shape[0]).reshape(X_reg.shape[0], 1)
X_reg_space_poly = poly_transformer.fit_transform(X_reg_space)
linear_models = train_regularized_models(X_reg, y_linear, regularization='lasso', alphas=linear_alphas)
linear_models_non_linear_data = train_regularized_models(X_reg, y_non_linear, regularization='lasso', alphas=linear_alphas)
non_linear_models = train_regularized_models(X_reg, y_non_linear, regularization='lasso', linear=False, degree=degree, alphas=non_linear_alphas)
plt.figure(figsize=(15, 6))
plt.subplot(131)
plt.title('Linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_linear, color='c')
for alpha, model in zip(linear_alphas, linear_models):
predictions = model.predict(X_reg)
plt.plot(X_reg, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.subplot(132)
plt.title('Linear model - Non linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_non_linear, color='c')
plt.axis([0, 2, 0, 25])
for alpha, model in zip(linear_alphas, linear_models_non_linear_data):  # these models were trained with linear_alphas
predictions = model.predict(X_reg)
plt.plot(X_reg, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.subplot(133)
plt.title('Non linear model - Non linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_non_linear, color='c')
plt.axis([0, 2, 0, 25])
for alpha, model in zip(non_linear_alphas, non_linear_models):
predictions = model.predict(X_reg_space_poly)
plt.plot(X_reg_space, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.show()
###Output
_____no_output_____
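###Markdown
As an aside, Lasso's $l_1$ penalty tends to drive the weights of unimportant features to exactly zero. Here is a small standalone sketch on a synthetic dataset (made up just for this illustration, not the notebook's data) comparing it with Ridge:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
n_samples, n_features = 100, 10
X_toy = rng.randn(n_samples, n_features)
# only the first two features actually influence the target
y_toy = 3 * X_toy[:, 0] - 2 * X_toy[:, 1] + 0.1 * rng.randn(n_samples)

lasso = Lasso(alpha=0.1).fit(X_toy, y_toy)
ridge = Ridge(alpha=0.1).fit(X_toy, y_toy)

print("Lasso coefficients:", np.round(lasso.coef_, 3))   # the irrelevant ones are exactly 0
print("Ridge coefficients:", np.round(ridge.coef_, 3))   # small but non-zero
```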
###Markdown
Notice that with Lasso the regularization is stronger and the model is smoother, so in theory it should generalize better.

**Note:** The good thing about Lasso is that it tends to eliminate the less important features, i.e., their weights become zero (as the small sketch above illustrates).

Elastic Net
Elastic Net is a middle ground between Ridge and Lasso. Its regularization term is a simple mix of both Ridge's and Lasso's regularization terms, and you can control the mix ratio $r$. When $r=0$, Elastic Net is equivalent to Ridge, and when $r=1$, it is equivalent to Lasso.$$J(\theta)=MSE(\theta) + r\alpha \sum_{i=1}^n |\theta_i| + \frac{1-r}{2}\alpha \sum_{i=1}^n \theta_i^2$$

When to use which?
* In general, it is better to use some level of regularization, so plain Linear Regression should be avoided if possible.
* Ridge is a good default.
* If you suspect some features might not be useful, prefer Lasso or Elastic Net, as they tend to drive the weights of useless features to zero.
* Elastic Net is generally preferred over Lasso, since the latter can behave erratically when the number of features is greater than the number of instances or when several features are strongly correlated.

Below is the same case as before, but using Elastic Net, invoked as ```ElasticNet(alpha=0.1, l1_ratio=0.5)```, where ```l1_ratio``` is the mix ratio $r$ mentioned above; a value of $r=0.5$ means both regularization terms are weighted equally.
###Code
# here we define the alpha parameters we'll pass to Elastic Net
linear_alphas = [0.01, 0.1, 1]
non_linear_alphas = [0.1, 1*10**-7, 1]
degree=40
poly_transformer = PolynomialFeatures(degree=degree)
X_reg_space = np.linspace(round(X_reg.min()), round(X_reg.max()), X_reg.shape[0]).reshape(X_reg.shape[0], 1)
X_reg_space_poly = poly_transformer.fit_transform(X_reg_space)
linear_models = train_regularized_models(X_reg, y_linear, regularization='enet', alphas=linear_alphas)
linear_models_non_linear_data = train_regularized_models(X_reg, y_non_linear, regularization='enet', alphas=linear_alphas)
non_linear_models = train_regularized_models(X_reg, y_non_linear, regularization='enet', linear=False, degree=degree, alphas=non_linear_alphas)
plt.figure(figsize=(15, 6))
plt.subplot(131)
plt.title('Linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_linear, color='c')
for alpha, model in zip(linear_alphas, linear_models):
predictions = model.predict(X_reg)
plt.plot(X_reg, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.subplot(132)
plt.title('Linear model - Non linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_non_linear, color='c')
plt.axis([0, 2, 0, 25])
for alpha, model in zip(linear_alphas, linear_models_non_linear_data):  # these models were trained with linear_alphas
predictions = model.predict(X_reg)
plt.plot(X_reg, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.subplot(133)
plt.title('Non linear model - Non linear data', fontsize=16)
plt.xlabel(r"$X_n$", fontsize=14)
plt.ylabel(r"$y$", fontsize=14, rotation=0)
plt.scatter(X_reg, y_non_linear, color='c')
plt.axis([0, 2, 0, 25])
for alpha, model in zip(non_linear_alphas, non_linear_models):
predictions = model.predict(X_reg_space_poly)
plt.plot(X_reg_space, predictions, label=r"$\alpha={}$".format(alpha))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We get pretty good results!

Logistic Regression
Logistic Regression (also called Logit Regression) can be used to estimate the probability that an instance belongs to a particular binary class, e.g., what is the probability that an email is spam? Similar to a classic Linear Regression model, a Logistic Regression computes a weighted sum of the input features plus the bias term, but instead of outputting the plain result, it outputs the *logistic* of this result. The logistic, denoted by $\sigma(\cdot)$, is a *sigmoid function*: $\sigma(x)=\frac{1}{1+e^{-x}}$. The vectorized form of the Logistic Regression model is then:$$\hat{p}=h_\theta(X)=\sigma(\theta^T \cdot X)$$Notice we are basically passing the result of the original linear function through the sigmoid function. Graphically, the sigmoid function is shaped like the letter 'S':
###Code
X = np.linspace(-10, 10, 100)
y = X # assume a linear relation y = x for illustration purposes
logits = 1 / (1 + np.exp(-y))
plt.figure(figsize=(10, 6))
plt.plot(X, logits, label=r"$\sigma(x)=\frac{1}{1 + e^{-x}}$")
plt.grid(True)
plt.xticks([-10, -5, 0, 5, 10])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
So, values above 0.5 belong to one class and values below 0.5 belong to the other; the output can be interpreted as the probability that the instance belongs to the positive class.$$\hat{y} = \begin{cases}0 \quad \text{if} \quad \hat{p} < 0.5,\\ 1 \quad \text{if} \quad \hat{p} \geq 0.5.\end{cases}$$

How is it trained?
The idea is to find the parameter vector $\theta$ that yields high probabilities for $y=1$ and low probabilities for $y=0$. The cost function for a single instance is:$$c(\theta) = \begin{cases}-\log(\hat{p}) \quad\quad \text{if} \quad y=1,\\ -\log(1-\hat{p}) \quad \text{if} \quad y=0. \end{cases}$$This cost function makes sense because $-\log(t)$ grows very large when $t$ approaches $0$: the cost will be large if the model estimates a probability close to $0$ for a positive instance, and it will also be very large if the model estimates a probability close to $1$ for a negative instance. On the other hand, $-\log(t)$ is close to $0$ when $t$ is close to $1$, so the cost will be close to $0$ if the estimated probability is close to $0$ for a negative instance or close to $1$ for a positive instance. The complete cost function is the *log loss*, also called *binary cross-entropy*, which averages the cost over all instances:$$J(\theta)=-\frac{1}{m}\sum_{i=1}^m \left[y^{(i)}\log(\hat{p}^{(i)})+(1-y^{(i)})\log(1-\hat{p}^{(i)})\right]$$Sadly, there is no known closed-form solution for the value of $\theta$ that minimizes this cost function. But the function is convex, so Gradient Descent will be able to find the global minimum (given a good mix of learning rate and epochs). We need the partial derivative of the cost function with respect to the $j^{th}$ model parameter $\theta_j$:$$\frac{\partial}{\partial\theta_j} J(\theta)=\frac{1}{m}\sum_{i=1}^{m} \left(\sigma(\theta^{T} \cdot x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$This equation looks very similar to the **partial derivative** in the Batch Gradient Descent section of this notebook. For SGD the formula is the same but uses a single instance at a time, and for Mini-Batch GD it uses one mini-batch at a time. (A minimal from-scratch sketch of this update appears a couple of cells below, right after the scikit-learn fit.)

Decision boundaries
This is an example using the iris dataset:
###Code
from sklearn import datasets
iris = datasets.load_iris()
print(iris.DESCR)
from sklearn.linear_model import LogisticRegression
X = iris['data'][:, 3:] # select only the petal width
# Remember that logistic regression it's capable of binary classification
y = (iris['target'] == 2).astype(int) # binarize the dataset, 1 if Iris-Virginica, else 0
log_reg = LogisticRegression()
log_reg.fit(X, y)
###Output
_____no_output_____
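###Markdown
As a quick aside, here is a minimal from-scratch sketch of the batch gradient-descent update derived above, applied to the same petal-width / Iris-Virginica problem. The learning rate, iteration count and variable names are arbitrary choices for this illustration, not part of the original workflow.

```python
import numpy as np
from sklearn import datasets

iris = datasets.load_iris()
X_lr = iris['data'][:, 3:]                        # petal width only
y_lr = (iris['target'] == 2).astype(np.float64)   # 1 if Iris-Virginica, else 0

X_b = np.c_[np.ones((len(X_lr), 1)), X_lr]        # prepend the bias column x0 = 1
theta = np.zeros(X_b.shape[1])
eta, n_iterations, m = 0.5, 50_000, len(X_b)      # hyperparameters chosen ad hoc for this sketch

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

for _ in range(n_iterations):
    p_hat = sigmoid(X_b @ theta)                  # estimated probabilities
    gradient = (1 / m) * X_b.T @ (p_hat - y_lr)   # the gradient formula derived above
    theta -= eta * gradient

print(theta, -theta[0] / theta[1])                # parameters and implied decision boundary
```

The implied boundary, $-\theta_0/\theta_1$, should land in the same neighbourhood as the boundary plotted in the next cells, though not exactly on it, since scikit-learn's `LogisticRegression` applies $l_2$ regularization by default.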
###Markdown
Let's look at the model's estimated probabilities for flowers with petal widths varying from 0 to 3cm
###Code
X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # equivalent to reshape(1000, 1)
y_proba = log_reg.predict_proba(X_new) # Get the probabilities for each class instead of the predicted class
# take the first element where the probability is greater or equals to 0.5
decision_boundary = X_new[y_proba[:, 1] >= 0.5][0]
plt.figure(figsize=(10, 6))
plt.plot(X[y==0], y[y==0], "bs")
plt.plot(X[y==1], y[y==1], "g^")
plt.plot(X_new, y_proba[:, 1], 'g-', label='Iris-virginica')
plt.plot(X_new, y_proba[:, 0], 'b--', label='Not Iris-virginica')
plt.plot([decision_boundary, decision_boundary], [-1, 2], 'k:')
plt.axis([0, 3, 0, 1])
plt.text(decision_boundary-0.23, 0.1, 'Decision boundary')
plt.arrow(decision_boundary, 0.9, 0.2, 0, head_width=0.05, head_length=0.1, fc='g', ec='g')
plt.arrow(decision_boundary, 0.05, -0.2, 0, head_width=0.05, head_length=0.1, fc='b', ec='b')
plt.xlabel('Petal width (cm)')
plt.ylabel('Probability')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The plot tells us that around 1.6 cm, where the petal widths of the two groups overlap, the model assigns roughly a 50% probability to each class. When we use `model.predict()` we get the predicted class, either $0$ or $1$; if we use `model.predict_proba()` we get the estimated probability of each class. Let's observe the decision boundary when using two features:
###Code
X = iris['data'][:, 2:] # select the petal length and width
# Remember that logistic regression it's capable of binary classification
y = (iris['target'] == 2).astype(int) # binarize the dataset, 1 if Iris-Virginica, else 0
log_reg = LogisticRegression(solver='liblinear', C=10**10, random_state=42)
log_reg.fit(X, y)
plt.figure(figsize=(10, 6))
# create an equidistant linear space for both attributes
x0, x1 = np.meshgrid(
np.linspace(2.9, 7, 500).reshape(-1, 1), # X, petal length
np.linspace(0.8, 3, 200).reshape(-1, 1) # Y, petal width
)
# then generate a new X with them, concat the results of both
# attributes. produces (100000, 2) shape
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
zz = y_proba[:, 1].reshape(x0.shape)
contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)
# NOTE: need to investigate this part.
left_right = np.array([2.9, 7])
boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.clabel(contour, inline=1, fontsize=12)
plt.plot(left_right, boundary, "k--", linewidth=3)
plt.plot(X[y==1, 0], X[y==1, 1], 'g^', label='Iris Virginica')
plt.plot(X[y==0, 0], X[y==0, 1], 'bs', label='Not Iris Virginica')
plt.xlabel('Petal length')
plt.ylabel('Petal width')
plt.axis([2.9, 7, 0.8, 2.7])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Softmax regression
Softmax regression, or *Multinomial Logistic Regression*, is the generalization of Logistic Regression that supports multiple classes directly, without ensembling or combining multiple binary classifiers.

The idea is to compute a score $S_k(x)$ for each class $k$ of an instance $x$, and then estimate the probability of each class by applying the *softmax function* (also called the normalized exponential) to the scores. The score equation is similar to Linear Regression:$$S_k(x)=(\theta^{(k)})^T \cdot x$$Each class has its own dedicated parameter vector $\theta^{(k)}$, typically stored as a row of a parameter matrix $\Theta$. Once the scores of every class are computed for an instance $x$, we estimate the probability $\hat{p}_k$ that the instance belongs to class $k$ using the softmax function:$$\hat{p}_k = \sigma(S(x))_k = \frac{\exp(S_k(x))}{\sum_{j=1}^K \exp(S_j(x))}$$Where:
* $K$ is the number of classes.
* $S(x)$ is a vector containing the scores of each class for instance $x$.
* $\sigma(S(x))_k$ is the estimated probability that instance $x$ belongs to class $k$, given the scores of each class for that instance.

Just like Logistic Regression, Softmax Regression predicts the class with the highest estimated probability:$$\hat{y}=\operatorname{argmax}_k \, \sigma(S(x))_k = \operatorname{argmax}_k \, S_k(x) = \operatorname{argmax}_k \left((\theta^{(k)})^T \cdot x\right)$$The $\operatorname{argmax}$ operator returns the value that maximizes a function; in this case, it returns the value of $k$ that maximizes the estimated probability $\sigma(S(x))_k$.

**Note:** Softmax is for mutually exclusive classes, i.e., there is a single predicted output.

How is it trained?
The cost function in this case, since we have multiple classes, is called *cross entropy*. The objective is to estimate a high probability for the target class and, at the same time, to penalize low probabilities for the target class; *cross entropy* achieves this:$$J(\Theta)=-\frac{1}{m}\sum_{i=1}^m \sum_{k=1}^K y_k^{(i)}\log(\hat{p}_k^{(i)})$$Where $y_k^{(i)}$ is equal to $1$ if the target class of the $i^{th}$ instance is $k$, and $0$ otherwise. Notice that when there are just two classes ($K=2$), this cost function is equivalent to Logistic Regression's cost function, the *log loss*.

The gradient vector of this cost function with regards to $\theta^{(k)}$ is given by:$$\nabla_{\theta^{(k)}}J(\Theta) = \frac{1}{m}\sum_{i=1}^m \left(\hat{p}_k^{(i)} - y_k^{(i)}\right)x^{(i)}$$Scikit-learn's `LogisticRegression` uses a one-versus-all approach by default when presented with multiple classes, but we can pass the `multi_class='multinomial'` hyperparameter to use Softmax Regression instead. For Softmax Regression it is important to use a solver that supports it, such as `lbfgs`.

From https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
```
solver : str, {‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’}, optional (default=’liblinear’).
Algorithm to use in the optimization problem.
For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones.
For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes.
‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty
‘liblinear’ and ‘saga’ also handle L1 penalty
‘saga’ also supports ‘elasticnet’ penalty
‘liblinear’ does not handle no penalty
```
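Before the scikit-learn example below, here is a minimal NumPy sketch of the softmax function, the cross-entropy cost and its gradient, matching the formulas above. The toy data, the shapes and the omission of a bias term are simplifications made just for this illustration:

```python
import numpy as np

def softmax(scores):
    # subtract the row-wise max for numerical stability (softmax is shift-invariant)
    exp_s = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp_s / exp_s.sum(axis=1, keepdims=True)

rng = np.random.RandomState(0)
m, n, K = 6, 2, 3                          # instances, features, classes
X_toy = rng.randn(m, n)
y_toy = rng.randint(0, K, size=m)
Y_onehot = np.eye(K)[y_toy]                # y_k^(i) indicators, shape (m, K)

Theta = np.zeros((n, K))                   # one parameter column per class (no bias term here)
P = softmax(X_toy @ Theta)                 # estimated probabilities p_hat_k^(i)

J = -np.mean(np.sum(Y_onehot * np.log(P + 1e-12), axis=1))   # cross-entropy cost
grad = (1 / m) * X_toy.T @ (P - Y_onehot)  # shape (n, K): one gradient column per theta^(k)
print(J, grad.shape)
```

A single gradient-descent step would then be `Theta -= eta * grad`, repeated until the cost stops improving.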
###Code
from sklearn.linear_model import LogisticRegression
X = iris['data'][:, (2, 3)] # petal length and petal width
y = iris['target']
softmax_reg = LogisticRegression(solver='lbfgs', multi_class='multinomial', C=10)
softmax_reg.fit(X, y)
sample = [5, 2] # an iris with 5cm petal length and 2cm petal width
prediction = softmax_reg.predict([sample])
prediction_prob = softmax_reg.predict_proba([sample]).max()
print(f"An iris with {sample[0]}cm petal lenght and \
{sample[1]}cm petal width, \
is classified as {prediction[0]} with probability of {prediction_prob:.2f}")
from matplotlib.colors import ListedColormap
plt.figure(figsize=(15, 6))
# create an equidistant linear space for both attributes
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1), # X, petal length
np.linspace(0, 3.5, 200).reshape(-1, 1) # Y, petal width
)
# then generate a new X with them, concat the results of both
# attributes. produces (100000, 2) shape
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = softmax_reg.predict_proba(X_new)
y_predict = softmax_reg.predict(X_new)
zz1 = y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.plot(X[y==0, 0], X[y==0, 1], 'yo', label='Iris-Setosa')
plt.plot(X[y==1, 0], X[y==1, 1], 'bs', label='Iris-Versicolour')
plt.plot(X[y==2, 0], X[y==2, 1], 'g^', label='Iris-Virginica')
plt.xlabel('Petal length')
plt.ylabel('Petal width')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Exercises
1. What Linear Regression training algorithm can you use if you have a training set with millions of features? A/ A Gradient Descent approach (batch, stochastic or mini-batch) would be better, since the closed-form solution scales badly with the number of features.
2. Suppose the features in your training set have very different scales. What algorithms might suffer from this, and how? What can you do about it? A/ Gradient Descent approaches suffer from this, as the cost 'bowl' becomes elongated and reaching the global minimum takes longer. It is important to use the same scale for all the features; we can use scikit-learn's `StandardScaler` for this.
3. Can Gradient Descent get stuck in a local minimum when training a Logistic Regression model? A/ No, because the cost function is convex, so given a good learning rate and enough epochs the algorithm will reach the global minimum. A convex function has a bowl shape, so the path to the global minimum is clear.
4. Do all Gradient Descent algorithms lead to the same model provided you let them run long enough? A/ No. Batch GD uses the whole set, so it is consistent and approaches the global minimum smoothly. Stochastic GD uses one sample per iteration, so the parameter adjustments are influenced by the sample being iterated; it has a jumpy behaviour and might never settle exactly on the global minimum, only approach it closely. Mini-Batch GD is a combination of both: depending on the batch size, the learning rate and the learning schedule, it may or may not reach the global minimum. So, although all three can deliver good results, the resulting models are not necessarily the same.
5. Suppose you use Batch Gradient Descent and you plot the validation error at every epoch. If you notice that the validation error consistently goes up, what is likely going on? How can you fix this? A/ If the validation error keeps going up, the adjustments to the model parameters are no longer helping; the model probably approached the minimum and then overshot it, which can be caused by a learning rate that is too large, making the gradient step too long. To solve it, we can adjust the hyperparameters (epochs and learning rate), or apply early stopping and keep the parameters from the best epoch.
6. Is it a good idea to stop Mini-batch Gradient Descent immediately when the validation error goes up? A/ No; we should only stop once the validation error has not improved for some time.
7. Which Gradient Descent algorithm (among those we discussed) will reach the vicinity of the optimal solution the fastest? Which will actually converge? How can you make the others converge as well? A/ Mini-Batch GD usually reaches the vicinity of the optimal solution the fastest, but it may never settle exactly on it. Batch GD will converge, although it takes longer. Stochastic GD will probably never settle on the optimal solution and can also take long, as it trains sample by sample in each epoch. We can make them converge by tuning the hyperparameters (for example, gradually reducing the learning rate) and by ensuring all features use the same scale.
8. Suppose you are using Polynomial Regression. You plot the learning curves and you notice that there is a large gap between the training error and the validation error. What is happening? What are three ways to solve this? A/ If there is a large gap between training and validation, the model is probably overfitting, as it is not able to generalize well enough. To solve this we can:
   * Given that we are working with polynomial regression, use a lower-degree polynomial for the data.
   * Add more training data.
   * Add regularization.
9. Suppose you are using Ridge Regression and you notice that the training error and the validation error are almost equal and fairly high. Would you say that the model suffers from high bias or high variance? Should you increase the regularization hyperparameter α or reduce it? A/ The model is suffering from underfitting, hence high bias; the model is probably too constrained, so reducing the $\alpha$ parameter would help, giving the parameters more freedom to fit the data.
10. Why would you want to use:
   * Ridge Regression instead of plain Linear Regression (i.e., without any regularization)? A/ If I have non-linear data and want to constrain the freedom of the model parameters, or simply because some level of regularization is usually preferable to none.
   * Lasso instead of Ridge Regression? A/ Lasso has the advantage of penalizing the less important features, so if a feature is not contributing much to the target value, its weight is simply driven to zero.
   * Elastic Net instead of Lasso? A/ Lasso can behave erratically if the number of features is greater than the number of samples, or when several features are strongly correlated; Elastic Net is more stable in those cases.
11. Suppose you want to classify pictures as outdoor/indoor and daytime/nighttime. Should you implement two Logistic Regression classifiers or one Softmax Regression classifier? A/ Since outdoor/indoor and daytime/nighttime are not mutually exclusive (a picture can be outdoor *and* nighttime), two Logistic Regression classifiers are the natural choice; a single Softmax Regression classifier would only work if we combined the labels into four mutually exclusive classes.

Utils
Geometric sequence generator for the learning schedule.
###Code
def lrs(eta0, rate):
    # infinite generator of geometrically decaying learning rates: eta0 * (1 / (1 + rate))**n
    n = 0
    while True:
        yield eta0 * pow(1 / (1 + rate), n)
        n += 1
etas = lrs(0.1, 0.1/50)
next(etas)
test = np.array([1.0, float('NaN')])
import sys
import random
import math
def ceil_root_random_even(top: int) -> int:
    # draw a random even number in [0, top], floor its square root, and return that value modulo 5
    number = random.randrange(0, top + 1, 2)
    number = number**(1/2)
    number = math.floor(number)
    return number % 5
ceil_root_random_even(100)
help(random.randint)
def get_third_lowest(inp: list) -> int:
inp.sort(reverse=False)
return inp[2]
get_third_lowest(list(range(4, 10)))
def leet_speak(normal: str) -> str:
    # upper-case every letter at an even index within each word, leaving the other letters unchanged
    words = normal.split()
    return ' '.join([''.join(
        [letter.upper() if i % 2 == 0 else letter for i, letter in enumerate(word)]
    ) for word in words])
leet_speak('This is a test.')
def upper_vowels(normal: str)-> str:
vowels = 'aeiou'
return ''.join([''.join(
[letter.upper() if letter in vowels else letter for letter in word]
) for word in normal])
upper_vowels('This is a test.')
###Output
_____no_output_____
|
examples/reactivity.ipynb
|
###Markdown
In the following cell, `b` is not defined yet:
###Code
a = b + 1
a
###Output
_____no_output_____
###Markdown
But the variable `a` *is* defined; its *value* is not (it is `None`). Because `b` was used in the definition of `a`, `b` is now implicitly defined, and its value is also undefined.
###Code
b
###Output
_____no_output_____
###Markdown
Now let's give a value to `b`.
###Code
b = 0
###Output
_____no_output_____
###Markdown
Its representation in cell 2 is immediately updated, as is the representation of `a`, which depends on `b`, in cell 1.Each time we change `b`'s value, cells 1 and 2 update accordingly.
###Code
b = 1
b = 2
###Output
_____no_output_____
###Markdown
We can of course build much more complex data flows, by defining variables on top of others.
###Code
from math import sin, cos, pi
y = sin(x) + cos(x) + a
###Output
_____no_output_____
###Markdown
The directed graph can be visualized.
###Code
y.visualize()
y
x
x = 0
x = pi / 2
x = pi
x = 3 * pi / 2
x = 2 * pi
b = 3
###Output
_____no_output_____
###Markdown
For a variable to be redefined, it must be deleted first. Previous representations still refer to the previous definition, though (see cell 7).
###Code
del y
y = 2 * a
y
b = 4
###Output
_____no_output_____
###Markdown
Reactivity Using operators
###Code
a = b + 1
a
b
b = 0
b = 1
b = 2
###Output
_____no_output_____
###Markdown
Using functions
###Code
from math import sin, cos, pi
y = sin(x) + cos(x)
y
x
x = 0
x = pi / 2
x = pi
x = 3 * pi / 2
x = 2 * pi
###Output
_____no_output_____
|
docs/ipynb/04-getting-started-current-induced-dw-motion.ipynb
|
###Markdown
Getting started 04: Current induced domain wall motion> Interactive online tutorial:> [](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)In this tutorial we show how Zhang-Li spin transfer torque (STT) can be included in micromagnetic simulations. To illustrate that, we will try to move a domain wall pair using spin-polarised current.Let us simulate a two-dimensional sample with length $L = 500 \,\text{nm}$, width $w = 20 \,\text{nm}$ and discretisation cell $(2.5 \,\text{nm}, 2.5 \,\text{nm}, 2.5 \,\text{nm})$. The material parameters are:- exchange energy constant $A = 15 \,\text{pJ}\,\text{m}^{-1}$,- Dzyaloshinskii-Moriya energy constant $D = 3 \,\text{mJ}\,\text{m}^{-2}$,- uniaxial anisotropy constant $K = 0.5 \,\text{MJ}\,\text{m}^{-3}$ with easy axis $\mathbf{u}$ in the out of plane direction $(0, 0, 1)$,- gyrotropic ratio $\gamma = 2.211 \times 10^{5} \,\text{m}\,\text{A}^{-1}\,\text{s}^{-1}$, and- Gilbert damping $\alpha=0.3$.
###Code
import oommfc as oc
import discretisedfield as df
import micromagneticmodel as mm
%matplotlib inline
# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinkii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma0 = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, cell=cell)
# Micromagnetic system definition
system = mm.System(name='domain_wall_pair')
system.energy = mm.Exchange(A=A) + \
mm.DMI(D=D, crystalclass="Cnv") + \
mm.UniaxialAnisotropy(K=K, u=u)
system.dynamics = mm.Precession(gamma0=gamma0) + mm.Damping(alpha=alpha)
###Output
_____no_output_____
###Markdown
Because we want to move a DW pair, we need to initialise the magnetisation in an appropriate way before we relax the system.
###Code
def m_value(pos):
x, y, z = pos
if 20e-9 < x < 40e-9:
return (0, 0, -1)
else:
return (0, 0, 1)
system.m = df.Field(mesh, dim=3, value=m_value, norm=Ms)
system.m.z.plane('z').k3d_scalar()
###Output
_____no_output_____
###Markdown
Now, we can relax the magnetisation.
###Code
md = oc.MinDriver()
md.drive(system)
system.m.z.plane('z').k3d_scalar()
###Output
Running OOMMF (ExeOOMMFRunner) [2020/06/12 00:40]... (8.4 s)
###Markdown
Now we can add the STT term to the dynamics equation.
###Code
ux = 400 # velocity in x-direction (m/s)
beta = 0.5 # non-adiabatic STT parameter
system.dynamics += mm.ZhangLi(u=ux, beta=beta) # please notice the use of `+=` operator
###Output
_____no_output_____
###Markdown
And drive the system for $0.5 \,\text{ns}$:
###Code
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
system.m.z.plane('z').k3d_scalar()
###Output
Running OOMMF (ExeOOMMFRunner) [2020/06/12 00:40]... (6.0 s)
###Markdown
We see that the DW pair has moved to the positive $x$ direction. Exercise 1Modify the code below (which is a copy of the example from above) to obtain one domain wall instead of a domain wall pair and move it using the same current.
###Code
# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinkii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma0 = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, cell=cell)
# Micromagnetic system definition
system = mm.System(name='domain_wall')
system.energy = mm.Exchange(A=A) + \
mm.DMI(D=D, crystalclass='Cnv') + \
mm.UniaxialAnisotropy(K=K, u=u)
system.dynamics = mm.Precession(gamma0=gamma0) + mm.Damping(alpha=alpha)
def m_value(pos):
x, y, z = pos
# Modify the following line
if 20e-9 < x < 40e-9:
return (0, 0, -1)
else:
return (0, 0, 1)
# We have added the y-component of 1e-8 to the magnetisation to be able to
# plot the vector field. This will not be necessary in the long run.
system.m = df.Field(mesh, dim=3, value=m_value, norm=Ms)
system.m.z.plane('z').k3d_scalar()
md = oc.MinDriver()
md.drive(system)
system.m.z.plane('z').k3d_scalar()
ux = 400 # velocity in x direction (m/s)
beta = 0.5 # non-adiabatic STT parameter
system.dynamics += mm.ZhangLi(u=ux, beta=beta)
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
system.m.z.plane('z').k3d_scalar()
###Output
Running OOMMF (ExeOOMMFRunner) [2020/06/12 00:40]... (5.6 s)
###Markdown
Getting started 04: Current induced domain wall motion> Interactive online tutorial:> [](https://mybinder.org/v2/gh/ubermag/oommfc/master?filepath=docs%2Fipynb%2Findex.ipynb)In this tutorial we show how Zhang-Li spin transfer torque (STT) can be included in micromagnetic simulations. To illustrate that, we will try to move a domain wall pair using spin-polarised current.Let us simulate a two-dimensional sample with length $L = 500 \,\text{nm}$, width $w = 20 \,\text{nm}$ and discretisation cell $(2.5 \,\text{nm}, 2.5 \,\text{nm}, 2.5 \,\text{nm})$. The material parameters are:- exchange energy constant $A = 15 \,\text{pJ}\,\text{m}^{-1}$,- Dzyaloshinskii-Moriya energy constant $D = 3 \,\text{mJ}\,\text{m}^{-2}$,- uniaxial anisotropy constant $K = 0.5 \,\text{MJ}\,\text{m}^{-3}$ with easy axis $\mathbf{u}$ in the out of plane direction $(0, 0, 1)$,- gyrotropic ratio $\gamma = 2.211 \times 10^{5} \,\text{m}\,\text{A}^{-1}\,\text{s}^{-1}$, and- Gilbert damping $\alpha=0.3$.
###Code
import discretisedfield as df
import micromagneticmodel as mm
import oommfc as oc
%matplotlib inline
# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinkii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma0 = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, cell=cell)
# Micromagnetic system definition
system = mm.System(name="domain_wall_pair")
system.energy = (
mm.Exchange(A=A) + mm.DMI(D=D, crystalclass="Cnv") + mm.UniaxialAnisotropy(K=K, u=u)
)
system.dynamics = mm.Precession(gamma0=gamma0) + mm.Damping(alpha=alpha)
###Output
_____no_output_____
###Markdown
Because we want to move a DW pair, we need to initialise the magnetisation in an appropriate way before we relax the system.
###Code
def m_value(pos):
x, y, z = pos
if 20e-9 < x < 40e-9:
return (0, 0, -1)
else:
return (0, 0, 1)
system.m = df.Field(mesh, dim=3, value=m_value, norm=Ms)
system.m.z.plane("z").k3d_voxels()
###Output
_____no_output_____
###Markdown
Now, we can relax the magnetisation.
###Code
md = oc.MinDriver()
md.drive(system)
system.m.z.plane("z").k3d_voxels()
###Output
2020/03/09 10:50: Running OOMMF (domain_wall_pair.mif) ... (4.6 s)
###Markdown
Now we can add the STT term to the dynamics equation.
###Code
ux = 400 # velocity in x-direction (m/s)
beta = 0.5 # non-adiabatic STT parameter
system.dynamics += mm.ZhangLi(u=ux, beta=beta) # please notice the use of `+=` operator
###Output
_____no_output_____
###Markdown
And drive the system for $0.5 \,\text{ns}$:
###Code
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
system.m.z.plane("z").k3d_voxels()
###Output
2020/03/09 10:51: Running OOMMF (domain_wall_pair.mif) ... (3.4 s)
###Markdown
We see that the DW pair has moved to the positive $x$ direction. Exercise 1Modify the code below (which is a copy of the example from above) to obtain one domain wall instead of a domain wall pair and move it using the same current.
###Code
# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinkii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma0 = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, cell=cell)
# Micromagnetic system definition
system = mm.System(name="domain_wall")
system.energy = (
mm.Exchange(A=A) + mm.DMI(D=D, crystalclass="Cnv") + mm.UniaxialAnisotropy(K=K, u=u)
)
system.dynamics = mm.Precession(gamma0=gamma0) + mm.Damping(alpha=alpha)
def m_value(pos):
x, y, z = pos
# Modify the following line
if 20e-9 < x < 40e-9:
return (0, 0, -1)
else:
return (0, 0, 1)
# We have added the y-component of 1e-8 to the magnetisation to be able to
# plot the vector field. This will not be necessary in the long run.
system.m = df.Field(mesh, dim=3, value=m_value, norm=Ms)
system.m.z.plane("z").k3d_voxels()
md = oc.MinDriver()
md.drive(system)
system.m.z.plane("z").k3d_voxels()
ux = 400 # velocity in x direction (m/s)
beta = 0.5 # non-adiabatic STT parameter
system.dynamics += mm.ZhangLi(u=ux, beta=beta)
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
system.m.z.plane("z").k3d_voxels()
###Output
2020/03/09 10:51: Running OOMMF (domain_wall.mif) ... (3.5 s)
|
Ex1_Train_Reuploading_Circuit_Pytorch.ipynb
|
###Markdown
Example: PyTorch version of a QNN trained on data
This code reproduces the QNN prediction map and the Hessian eigenvalue distribution with PyTorch.
###Code
import numpy as np
from vqc_loss_landscapes.torchcirq import *
from vqc_loss_landscapes.data_helper import *
import torch
from torch.autograd import Variable
from vqc_loss_landscapes.complex import *
import qutip.qip.circuit as QCirc
from tqdm import tqdm as tqdm
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
%load_ext autoreload
%autoreload 2
device = "cpu"
def load_circle_data(path):
np_file = np.load(path+".npy", allow_pickle=True)
return np_file.item()
def test_loss(circ, params, x_train, y_train, init, measurements=None, device="cpu"):
loss = 0.0
i = 0
for x,y in zip(x_train, y_train):
# print("test loss: ", i)
# i+=1
y = 2*(y - 1/2)
out1 = matmul(circ(params, x=x, device=device), init)
out1_copy = out1.clone().detach().requires_grad_(True)
out2 = matmul(measurements, out1)
out1 = inner_prod(out1_copy, out2) # this is already <Psi|0><0|Psi>
# print(out1[0])
loss += (y-out1[0])**2
return loss/len(x_train)
def predicted_labels(output_fidelity):
output_labels = [np.argmax(o) for o in output_fidelity]
return np.array(output_labels)
def prediction(params, x=None, y=None, measurements = None, device="cpu"):
"""For prediction of model measure in both directions"""
output_fidelity = []
for i in tqdm(range(len(x))):
fidelities = []
out1 = matmul(circ(params, x=x[i], device=device), init)
out1_copy = out1.clone().detach().requires_grad_(True)
out2 = matmul(measurements, out1)
out1 = inner_prod(out1_copy, out2)
fidelities.append(out1[0].detach().cpu())
del out1
output_fidelity.append(fidelities)
#predicted = predicted_labels(output_fidelity)
return output_fidelity
def get_fidelity():
fidl= prediction(params, x=X_map, y=None, measurements = model.measures, device=device)
c = np.array(fidl)
fidelity = []
for i in c[:,0]:
fidelity.append(i.item())
fidelity = np.array(fidelity)
return fidelity
def plot_prediction_map(epoch):
fidelity = get_fidelity()
Z = fidelity.reshape(grid_resolution, grid_resolution)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
cont = ax.contourf(X_grid,Y_grid,Z, 20, cmap='magma')
plt.ylim(-1, 1)
cont.set_clim(-0.4, 1.5)
plt.show()
#============================================================================
# Mapping Parameters
# ==========================================================================
grid_resolution = 21
x_path = torch.linspace(-1,1,grid_resolution).to(device)
y_path = torch.linspace(-1,1,grid_resolution).to(device)
X_grid, Y_grid = torch.meshgrid(x_path.cpu(), y_path.cpu())
X = []
for i in range(len(x_path)):
for j in range(len(y_path)):
X.append([x_path[i], y_path[j], 0])
X_map = torch.tensor(X).to(device)
if torch.cuda.is_available():
device = torch.device('cuda:0')
else:
device = "cpu"
width = 2
layers = 4
batch_size = 16
epochs = 100
train_samples = 100
test_samples = 100
epsilon = 0
# ==========================================================================
# Data Generating
# ==========================================================================
X, y_train = generate_circle_data(train_samples)
X_train = np.zeros((train_samples,3))
X_train[:,0:2] = X[:] # add extra dim x_3 = 0
X, y_test = generate_circle_data(test_samples)
X_test = np.zeros((test_samples,3))
X_test[:,0:2] = X[:]
X_train = torch.tensor(X_train).to(device)
y_train = torch.tensor(y_train).to(device)
X_test = torch.tensor(X_test).to(device)
y_test = torch.tensor(y_test).to(device)
# =================================================================================
# Main Training
# =================================================================================
print("start training width{} layers{} epochs{}".format(width, layers, epochs))
init = torch.zeros(2**width).to(device)
init[0] = 1.0 # Define initial state
init = make_complex(init)
lr = 0.1
params = torch.randn((layers, width ,2 ,3), requires_grad=True, device=device)
optimizer = torch.optim.SGD([params], lr=lr)
model = Model_Z_measure(params, width = width, layers = layers, device=device)
# circ = model.build_circuit(params, x=torch.randn((3)))
x = torch.randn((3))
circ = model.build_circuit
print("circuit built")
progress = tqdm(range(epochs), bar_format='{desc}')
loss_list = []
for i in progress:
index = torch.randperm(len(X_train))
plot_prediction_map(i)
X = X_train[index][0:10] # take random sample of X and y (bc it is faster)
y = y_train[index][0:10]
loss = test_loss(circ, params, X, y, init, measurements=model.measures, device=device)
ev, H = find_heigenvalues(loss, params)
plt.plot(ev)
plt.title("Hessian eigenvalues")
plt.show()
batch_idx = 0
for Xbatch, ybatch in iterate_minibatches(X_train[index], y_train[index], batch_size=batch_size):
if batch_idx%10 == 0:
print("epoch: ", i, "batch: ", batch_idx)
batch_idx += 1
loss = test_loss(circ, params, Xbatch, ybatch, init, measurements=model.measures, device=device)
optimizer.zero_grad()
loss.backward(retain_graph=True)
optimizer.step()
print("Calc Hessian")
loss_list.append(loss.item())
optimizer = torch.optim.SGD([params], lr=lr)
progress.set_description_str('loss: {:3f}'.format(loss))
###Output
start training width2 layers4 epochs100
circuit built
|
02 and 03 -improving Deep Neural Networks Hyperparamete/week5/Regularization/Regularization_v2a.ipynb
|
###Markdown
RegularizationWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!**You will learn to:** Use regularization in your deep learning models.Let's first import the packages you are going to use. Updates to Assignment If you were working on a previous version* The current notebook filename is version "2a". * You can find your work in the file directory as version "2".* To see the file directory, click on the Coursera logo at the top left of the notebook. List of Updates* Clarified explanation of 'keep_prob' in the text description.* Fixed a comment so that keep_prob and 1-keep_prob add up to 100%* Updated print statements and 'expected output' for easier visual comparisons.
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. **Figure 1** : **Football field** The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head They give you the following 2D dataset from France's past 10 games.
###Code
train_X, train_Y, test_X, test_Y = load_2D_dataset()
###Output
_____no_output_____
###Markdown
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.- If the dot is blue, it means the French player managed to hit the ball with his/her head- If the dot is red, it means the other team's player hit the ball with their head**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball. **Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. 1 - Non-regularized modelYou will use the following neural network (already implemented for you below). This model can be used:- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python. - in *dropout mode* -- by setting the `keep_prob` to a value less than oneYou will first try the model without any regularization. Then, you will implement:- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
###Code
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
###Code
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6557412523481002
Cost after iteration 10000: 0.16329987525724213
Cost after iteration 20000: 0.13851642423253263
###Markdown
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
###Code
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting. 2 - L2 RegularizationThe standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$To:$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$Let's modify your cost and observe the consequences.**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :```pythonnp.sum(np.square(Wl))```Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
###Code
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m))*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
###Output
cost = 1.78648594516
###Markdown
**Expected Output**: **cost** 1.78648594516 Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. **Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
###Code
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*((W3))
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*((W2))
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*((W1))
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
###Output
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
###Markdown
**Expected Output**:```dW1 = [[-0.25604646 0.12298827 -0.28297129] [-0.17706303 0.34536094 -0.4410571 ]]dW2 = [[ 0.79276486 0.85133918] [-0.0957219 -0.01720463] [-0.13100772 -0.03750433]]dW3 = [[-1.77691347 -0.11832879 -0.09397446]]``` Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call: - `compute_cost_with_regularization` instead of `compute_cost`- `backward_propagation_with_regularization` instead of `backward_propagation`
###Code
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6974484493131264
Cost after iteration 10000: 0.2684918873282239
Cost after iteration 20000: 0.26809163371273015
###Markdown
Congrats, the test set accuracy increased to 93%. You have saved the French football team!You are not overfitting the training data anymore. Let's plot the decision boundary.
###Code
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.**What is L2-regularization actually doing?**:L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. **What you should remember** -- the implications of L2-regularization on:- The cost computation: - A regularization term is added to the cost- The backpropagation function: - There are extra terms in the gradients with respect to weight matrices- Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. 3 - DropoutFinally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!<!--To understand drop-out, consider this conversation with a friend:- Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."!--> Figure 2 : Drop-out on the second hidden layer. At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. Figure 3 : Drop-out on the first and third hidden layers. $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. 3.1 - Forward propagation with dropout**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. **Instructions**:You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:1. In lecture, we dicussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.2. 
Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.This python statement: `X = (X < keep_prob).astype(int)` is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :```for i,v in enumerate(x): if v < keep_prob: x[i] = 1 else: v >= keep_prob x[i] = 0```Note that the `X = (X < keep_prob).astype(int)` works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
###Code
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1 < keep_prob).astype(int) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2 < keep_prob).astype(int) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
###Output
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
###Markdown
**Expected Output**: **A3** [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]] 3.2 - Backward propagation with dropout**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. **Instruction**:Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. 2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
###Code
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
###Output
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
###Markdown
**Expected Output**: ```dA1 = [[ 0.36544439 0. -0.00188233 0. -0.17408748] [ 0.65515713 0. -0.00337459 0. -0. ]]dA2 = [[ 0.58180856 0. -0.00299679 0. -0.27715731] [ 0. 0.53159854 -0. 0.53159854 -0.34089673] [ 0. 0. -0.00292733 0. -0. ]]``` Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neurons of layer 1 and 2 with 14% probability. The function `model()` will now call:- `forward_propagation_with_dropout` instead of `forward_propagation`.- `backward_propagation_with_dropout` instead of `backward_propagation`.
###Code
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6543912405149825
###Markdown
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary.
###Code
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
|
cnn_vis/visualize_saliency.ipynb
|
###Markdown
Reference: https://arxiv.org/pdf/1312.6034.pdf
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import cv2
model = tf.keras.applications.vgg16.VGG16(include_top=True, weights="imagenet")
model.summary()
image = np.array(tf.keras.preprocessing.image.load_img(path="./images/labrador.jpeg",target_size=(224,224)))
plt.imshow(image)
plt.show()
preprocessed_image = tf.keras.applications.vgg16.preprocess_input(image.astype(np.float32))
preprocessed_image = np.expand_dims(preprocessed_image, axis=0) # reshape it to (1,224,224,3),
# trick to get cleaner visualizations: swap the final softmax activation for a linear one
model.get_layer("predictions").activation = None
def get_gradients(model, image, class_index):
image_tensor = tf.convert_to_tensor(image, dtype="float32")
with tf.GradientTape() as tape:
tape.watch(image_tensor)
output = model(image_tensor)
loss = tf.reduce_mean(output[:, class_index])
grads = tape.gradient(loss, image_tensor)
return grads
class_index = 208 # labrador
grads = get_gradients(model, preprocessed_image, class_index=class_index)
gradient_image = grads.numpy()[0]
gradient_image.shape
saliency_map = np.max(np.abs(gradient_image), axis=2)
saliency_map.shape
plt.imshow(saliency_map)
plt.show()
###Output
_____no_output_____
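###Markdown
The raw saliency map is easier to interpret when it is blended over the input picture. The cell below is an added sketch, not part of the original reference: it assumes the `image` and `saliency_map` arrays defined above and uses the `cv2` import (otherwise unused in this notebook) to build a simple heat-map overlay.
###Code
# Added sketch: normalize the saliency map to [0, 255], colour it, and blend
# it over the input image so the highlighted pixels can be compared with the
# object location.
normalized = (saliency_map - saliency_map.min()) / (saliency_map.max() - saliency_map.min() + 1e-8)
heatmap = cv2.applyColorMap((normalized * 255).astype(np.uint8), cv2.COLORMAP_JET)
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)  # OpenCV returns BGR, matplotlib expects RGB
overlay = cv2.addWeighted(image, 0.6, heatmap, 0.4, 0)
plt.imshow(overlay)
plt.axis("off")
plt.show()
###Output
_____no_output_____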
|
EuroPython/cspythonv1.ipynb
|
###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Introduction to Control Systems v1a Hello everyone, on this occasion I would like to share the notebook that I used to create the poster for this year's EuroPython.Don't forget to follow me on github then :) Install the required libraries first; if you have already installed them, skip to the next section.
###Code
!pip install control
!pip install slycot
!pip install scipy
!pip install numpy
!pip install matplotlib
###Output
Collecting control
[?25l Downloading https://files.pythonhosted.org/packages/e8/b0/32a903138505dd4ea523f8a3fc156c4272aa58b10100ef24ff74ced2fae8/control-0.8.3.tar.gz (249kB)
[K |█▎ | 10kB 15.5MB/s eta 0:00:01
[K |██▋ | 20kB 2.0MB/s eta 0:00:01
[K |████ | 30kB 2.6MB/s eta 0:00:01
[K |█████▎ | 40kB 3.0MB/s eta 0:00:01
[K |██████▋ | 51kB 2.4MB/s eta 0:00:01
[K |███████▉ | 61kB 2.7MB/s eta 0:00:01
[K |█████████▏ | 71kB 2.9MB/s eta 0:00:01
[K |██████████▌ | 81kB 3.2MB/s eta 0:00:01
[K |███████████▉ | 92kB 3.3MB/s eta 0:00:01
[K |█████████████▏ | 102kB 3.3MB/s eta 0:00:01
[K |██████████████▍ | 112kB 3.3MB/s eta 0:00:01
[K |███████████████▊ | 122kB 3.3MB/s eta 0:00:01
[K |█████████████████ | 133kB 3.3MB/s eta 0:00:01
[K |██████████████████▍ | 143kB 3.3MB/s eta 0:00:01
[K |███████████████████▊ | 153kB 3.3MB/s eta 0:00:01
[K |█████████████████████ | 163kB 3.3MB/s eta 0:00:01
[K |██████████████████████▎ | 174kB 3.3MB/s eta 0:00:01
[K |███████████████████████▋ | 184kB 3.3MB/s eta 0:00:01
[K |█████████████████████████ | 194kB 3.3MB/s eta 0:00:01
[K |██████████████████████████▎ | 204kB 3.3MB/s eta 0:00:01
[K |███████████████████████████▋ | 215kB 3.3MB/s eta 0:00:01
[K |████████████████████████████▉ | 225kB 3.3MB/s eta 0:00:01
[K |██████████████████████████████▏ | 235kB 3.3MB/s eta 0:00:01
[K |███████████████████████████████▌| 245kB 3.3MB/s eta 0:00:01
[K |████████████████████████████████| 256kB 3.3MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from control) (1.18.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from control) (1.4.1)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from control) (3.2.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (1.2.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (2.8.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib->control) (1.12.0)
Building wheels for collected packages: control
Building wheel for control (setup.py) ... [?25l[?25hdone
Created wheel for control: filename=control-0.8.3-py2.py3-none-any.whl size=260982 sha256=0a6c52a9e379f850f1dbd04c38b1d1cea598de1914909142665e138bafb36f0f
Stored in directory: /root/.cache/pip/wheels/c2/d9/cc/90b28cb139a6320a3af2285428b6da87eee8d8920c78bb0223
Successfully built control
Installing collected packages: control
Successfully installed control-0.8.3
Collecting slycot
[?25l Downloading https://files.pythonhosted.org/packages/85/21/4e7110462f3529b2fbcff8a519b61bf64e0604b8fcbe9a07649c9bed9d7a/slycot-0.4.0.0.tar.gz (1.5MB)
[K |████████████████████████████████| 1.6MB 3.4MB/s
[?25h Installing build dependencies ... [?25l[?25hdone
Getting requirements to build wheel ... [?25l[?25hdone
Preparing wheel metadata ... [?25l[?25hdone
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from slycot) (1.18.5)
Building wheels for collected packages: slycot
Building wheel for slycot (PEP 517) ... [?25l[?25hdone
Created wheel for slycot: filename=slycot-0.4.0-cp36-cp36m-linux_x86_64.whl size=1413148 sha256=056b26702cf834f59b6978482606dd82235755a9573fa98f837e577056f6b59f
Stored in directory: /root/.cache/pip/wheels/a2/46/56/f82cbb2fd06556f4f3952a2eb2396e8fd29264fffecbaad3cf
Successfully built slycot
Installing collected packages: slycot
Successfully installed slycot-0.4.0
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (1.4.1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from scipy) (1.18.5)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (1.18.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (3.2.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.8.1)
Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.18.5)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.4.7)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib) (1.12.0)
###Markdown
Caveat: * Sorry for the array mess; the python control systems library always prints that kind of array and I am still trying to figure out how to get rid of it.* In case the equations are broken or not displayed properly, try to visit the button below to view the notebook directly in Google Colab. Tutorial on Control Systems Design with Python LTI (Linear Time-Invariant) systems are assumed here because nonlinear or other complex systems are difficult to design and need a more advanced understanding of the control systems field. Library Importing First of all, we need to import several essential libraries for designing the control systems, as listed below
###Code
import control # This is python control library (https://python-control.readthedocs.io/en/latest/intro.html)
import matplotlib.pyplot as plt
import numpy as np
import scipy
from control.matlab import * # To import matlab like function in designing control systems
###Output
_____no_output_____
###Markdown
Defining Transfer Function Let's assume we have an arbitrary transfer function equal toContinuous Transfer Function\begin{align}\frac{s}{s^2 + 2s + 6}\end{align}Discrete Transfer Function\begin{align}\frac{z}{z^2 + 2z + 6}\end{align} Import the Function from Python Control
###Code
from control import TransferFunction, pole, zero # Transfer Function function import
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Time Systems
s = TransferFunction.s
sysc = s / (s**2 + 2*s + 6)
# Discrete Time Systems
z = TransferFunction.z
sysd = z / (z**2 + 2*z + 6)
###Output
_____no_output_____
###Markdown
MATLAB like Function
###Code
# Continuous Time Systems
s = tf('s')
sysc = s/(s**2 + 2*s + 6)
# Discrete Time Systems
z = tf('z')
sysd = z / (z**2 + 2*z + 6)
###Output
_____no_output_____
###Markdown
Stability CheckIn order to get the specified output, the various parameters of the system must be controlled. Along with this, the system must be stable enough so that the output is not affected by undesirable variations in the system parameters or by disturbances.Thus we can say that a stable system is designed so as to get the desired response of the system without any intolerable variation with the changes in the system parameters.Source: https://electronicscoach.com/stability-of-control-system.htmlSource: https://flylib.com/books/en/2.729.1/the_z_transform.htmlA continuous system is called unstable if* The system poles are located in the right half plane of the s-plane A discrete system is called unstable if* The system poles are located outside the unit circle
###Code
# The function of python control libraries are the same as MATLAB
# From this we can analyze the systems stability by find out the poles
# Continuous Systems
print('Continuous Systems')
# The poles
pc = pole(sysc)
zc = zero(sysc)
print('Pole of The Systems'),
print(pc),
print()
# Discrete Systems
print('Discrete systems')
# The poles
pd = pole(sysd)
zd = zero(sysd)
print('Pole of The Systems'),
print(pd)
###Output
Continuous Systems
Pole of The Systems
[-1.+2.23606798j -1.-2.23606798j]
Discrete systems
Pole of The Systems
[-1.+2.23606798j -1.-2.23606798j]
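###Markdown
To make that verdict explicit, the cell below is a small added helper (not part of the original notebook) that applies the rules above: real parts strictly negative for continuous-time systems, pole magnitudes strictly below one for discrete-time systems.
###Code
# Added sketch: turn the pole locations into an explicit stable/unstable verdict
def is_stable(sys, discrete=False):
    p = pole(sys)
    if discrete:
        # Discrete time: every pole must lie strictly inside the unit circle
        return bool(np.all(np.abs(p) < 1))
    # Continuous time: every pole must lie in the open left half plane
    return bool(np.all(np.real(p) < 0))

print('Continuous system stable:', is_stable(sysc))
print('Discrete system stable  :', is_stable(sysd, discrete=True))
###Output
_____no_output_____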
###Markdown
Defining State Space Matrix of System Importing The Main Function
###Code
from control import StateSpace
###Output
_____no_output_____
###Markdown
Convert Transfer Function to State Space Form and Vice Versa
###Code
# In this case Python Control Function as same as MATLAB Like Function
from control import tf2ss
sysc = tf2ss(sysc)
sysd = tf2ss(sysd)
sysc
# Assume we have systems as below
A = np.array([[-2, -6], [1, 0]])
B = np.array([[-1], [0]])
C = np.array([[-1, 0]])
D = np.array([[0]])
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Systems
sysc = StateSpace(A,B,C,D)
sysc
# Discrete Systems
ts = 0.01 # The Sampling Time
sysd = StateSpace(A,B,C,D,ts)
sysd
###Output
_____no_output_____
###Markdown
MATLAB like Function
###Code
# Continuous Systems
sysc = ss(A,B,C,D)
sysc
# Discrete Systems
ts = 0.01 # The Sampling Time
sysd = ss(A,B,C,D,ts)
sysd
###Output
_____no_output_____
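###Markdown
A quick added note: passing `ts` to `ss()` only labels the model as sampled at `ts`; it does not discretize the continuous dynamics. The sketch below (assuming the MATLAB-like `c2d` helper imported from `control.matlab`) shows how an actual zero-order-hold discretization of the continuous model could be obtained instead.
###Code
# Added sketch: discretize the continuous state-space model with a zero-order hold
sysd_zoh = c2d(sysc, ts)
sysd_zoh
###Output
_____no_output_____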
###Markdown
Stability CheckThe aim of the stability check in state space form is the same as in transfer function form: it is to make sure the system can reach the desired point or reference without any intolerable variation or change.The way to check it is also the same, but in this case, instead of using the poles, the stability is checked using the eigenvalues of the state matrix.
###Code
# Check the systems stability by viewing the eigenvalue
# Continuous Systems
eigs, eigvs = np.linalg.eig(sysc.A)
print('Continuous Systems Eigenvalues'),
print(eigs)
# Discrete Systems
eigd, eigvd = np.linalg.eig(sysd.A)
print('Discrete Systems Eigenvalues'),
print(eigd)
###Output
Continuous Systems Eigenvalues
[-1.+2.23606798j -1.-2.23606798j]
Discrete Systems Eigenvalues
[-1.+2.23606798j -1.-2.23606798j]
###Markdown
Controllability and Observability CheckThe intuition behind the controllability and observability checks:* Controllability: In order to be able to do whatever we want with the given dynamic system under control input, the system must be controllable.* Observability: In order to see what is going on inside the system under observation, the system must be observable.Source: https://www.ece.rutgers.edu/~gajic/psfiles/chap5.pdf
###Code
from control import obsv, ctrb
# In this case the function for control libraries and MATLAB are same
# Continuous Systems
# Controllability Check
cc = ctrb(sysc.A, sysc.B)
rankcc = np.linalg.matrix_rank(cc)
print('Continuous Systems', '\n')
print('The Controllability Matrix'),
print(cc),
print('Rank of Controllability Matrix'),
print(rankcc),
# Observability Check
oc = obsv(sysc.A, sysc.C)
rankoc = np.linalg.matrix_rank(oc)
print('The Observability Matrix'),
print(oc),
print('Rank of Observability Matrix'),
print(rankoc),
print()
# Discrete Systems
# Controllability Check
cd = ctrb(sysd.A, sysc.B)
rankcd = np.linalg.matrix_rank(cd)
print('Discrete Systems', '\n')
print('The Controllability Matrix'),
print(cd),
print('Rank of Controllability Matrix'),
print(rankcd),
# Observability Check
od = obsv(sysd.A, sysc.C)
rankod = np.linalg.matrix_rank(od)
print('The Observability Matrix'),
print(od),
print('Rank of Observability Matrix'),
print(rankod)
###Output
Continuous Systems
The Controllability Matrix
[[-1. 2.]
[ 0. -1.]]
Rank of Controllability Matrix
2
The Observability Matrix
[[-1. 0.]
[ 2. 6.]]
Rank of Observability Matrix
2
Discrete Systems
The Controllability Matrix
[[-1. 2.]
[ 0. -1.]]
Rank of Controllability Matrix
2
The Observability Matrix
[[-1. 0.]
[ 2. 6.]]
Rank of Observability Matrix
2
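###Markdown
As a small added check (not in the original notebook): a system is controllable or observable exactly when the corresponding matrix has rank equal to the number of states, so the ranks computed above can be turned into explicit yes/no answers.
###Code
# Added sketch: compare the ranks against the number of states
n_states = sysc.A.shape[0]
print('Continuous system controllable:', rankcc == n_states)
print('Continuous system observable  :', rankoc == n_states)
print('Discrete system controllable  :', rankcd == n_states)
print('Discrete system observable    :', rankod == n_states)
###Output
_____no_output_____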
###Markdown
Analyze the Control Systems
###Code
from control import bode_plot, nyquist_plot, root_locus, pzmap
###Output
_____no_output_____
###Markdown
Pole Zero Map
###Code
# Continuous
pzmap(sysc)
# Discrete
pzmap(sysd)
###Output
_____no_output_____
###Markdown
Root Locus
###Code
# Continuous
root_locus(sysc)
# Discrete
root_locus(sysd)
###Output
_____no_output_____
###Markdown
Bode Plot
###Code
# continuous
bode_plot(sysc)
# Discrete
bode_plot(sysd)
###Output
_____no_output_____
###Markdown
Nyquist Plot
###Code
# Continuous
nyquist_plot(sysc)
# Discrete
nyquist_plot(sysd)
###Output
/usr/local/lib/python3.6/dist-packages/control/statesp.py:516: UserWarning: freqresp: frequency evaluation above Nyquist frequency
warn("freqresp: frequency evaluation above Nyquist frequency")
###Markdown
Test the System Response We can analyze the response of a system with two approaches: first the time-domain approach, then the frequency-domain approach
###Code
from control import step_response # Step Response function import
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Time Systems
tc, yc = step_response(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
td, yd = step_response(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
MATLAB like function
###Code
# Continuous Time Systems
yc, tc = step(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
yd, td = step(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
Impulse Response
###Code
from control import impulse_response # Impulse Response function import
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Time Systems
tc, yc = impulse_response(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
td, yd = impulse_response(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
MATLAB like Function
###Code
# Continuous Time Systems
yc, tc = impulse(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
yd, td = impulse(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
Frequency Domain Approach Frequency Response
###Code
from control import freqresp
# Continuous
freqresp(sysc, [10])
# Discrete
freqresp(sysd, [12])
###Output
_____no_output_____
###Markdown
Okay Then, Let's Simulate The Real World Systems! On this occasion, the design of the control algorithm uses MATLAB-like functions, so the code is also compatible with MATLAB. In case you want to try the simulation there, it will also work in MATLAB, but you must revise it with a few slight changes to the syntax. DC Motor Speed Control Source: http://ctms.engin.umich.edu/CTMS/index.php?example=MotorSpeed&section=SystemModeling (CTMS, University of Michigan)**The control aim for this system is to drive the DC motor speed to the desired speed by regulating the voltage applied to the motor.**The system characteristics are continuous and linear in nature, because in this tutorial I limit the scope to linear systems.In this case the control goal is called tracking, since the goal of our control problem is to drive the DC motor speed to the desired speed.
###Code
J = 0.08; # The motor moment of inertia
b = 0.05;
K = 0.01;
R = 0.5;
L = 0.5;
s = tf('s');
sys_tf = K/((J*s+b)*(L*s+R)+K**2)
# Also change the form of transfer function into state space model for controlling through the modern control algorithm
sys_ss = tf2ss(sys_tf)
## stability Checking
pzmap(sys_ss)
pole(sys_tf)
# Stability Checking
eig = np.linalg.eig(sys_ss.A)
print('System Eigenvalue'),
print(eig[0], "\n"),
# Controllability Check
ctrM = ctrb(sys_ss.A, sys_ss.B)
rankCtr = np.linalg.matrix_rank(ctrM)
print('Controllability Rank'),
print(rankCtr, "\n"),
# Observabiity Check
obsM = obsv(sys_ss.A, sys_ss.C)
rankObs = np.linalg.matrix_rank(obsM)
print('Observability Rank'),
print(rankObs)
###Output
System Eigenvalue
[-0.9932104 -0.6317896]
Controllability Rank
2
Observability Rank
2
###Markdown
It's stable because the poles (eigenvalues) are located in the left half plane. The system is controllable and observable because both the controllability and observability matrices have full rank. PID Controller The equation of the PID controller is denoted below,\begin{align}u(t) = K_p*e(t) + K_d*\frac{de(t)}{dt} + K_i * \int{e(t)} \end{align}Where Kp, Ki, Kd, and e(t) are the proportional gain, integral gain, derivative gain, and system error, respectively.If we want to write it in transfer function form, then\begin{align}U(s)=K_p*E(s) + K_d * sE(s) + K_i * \frac{1}{s} * E(s) \end{align}If we assume\begin{align}\tau_i = \frac{K_p}{K_i} ;\tau_d = \frac{K_d}{K_p}\end{align}then the final equation for the controller is\begin{align}U(s) = K_p * E(s)(1 + s * \tau_d + \frac{1}{s*\tau_i})\end{align}In this test I select the gains arbitrarily as follows: Kp = 160, Ki = 4, and Kd = 80. Feel free to experiment with other gain combinations to see the effect of changing the gains.
###Code
Kp = 160; # The proportional gain
Ki = 4; # The integral gain
Kd = 80; # The derivative gain
Ti = Kp / Ki
Td = Kd / Kp
pid = Kp * (1 + 1 / (Ti * s) + Td * s)
isys = feedback(sys_tf * pid, 1)
t = np.linspace(0, 10)
y, t = step(isys, t)
plt.plot(t, y)
plt.title('PID Controller Response')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
###Output
_____no_output_____
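###Markdown
Following up on the suggestion above, the cell below is an added sketch (the gains are chosen arbitrarily) that sweeps the proportional gain while Ki and Kd are kept fixed, so the effect of the gain change on the closed-loop step response can be seen directly.
###Code
# Added sketch: compare the closed-loop step response for a few proportional gains
t_sweep = np.linspace(0, 10, 300)
for Kp_try in [40, 160, 640]:
    # Same PID structure as above, with Ki and Kd unchanged
    pid_try = Kp_try * (1 + 1 / ((Kp_try / Ki) * s) + (Kd / Kp_try) * s)
    y_try, t_out = step(feedback(sys_tf * pid_try, 1), t_sweep)
    plt.plot(t_out, y_try, label='Kp = {}'.format(Kp_try))
plt.title('Effect of the Proportional Gain')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend()
###Output
_____no_output_____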
###Markdown
State Feedback Controller (Pole Placement Method) In this design, I would like to place both poles in the left half plane, at the specific locations (-3, -4). You can also try other combinations to see how the system response changes
###Code
desiredPole = np.array([-3,-4])
ppGain = place(sys_ss.A, sys_ss.B, desiredPole)
feedbackMech = sys_ss.A - sys_ss.B * ppGain
newSys = ss(feedbackMech, sys_ss.B, sys_ss.C, 0)
t = np.linspace(0,10)
scaleSF = 1 / 0.02083333 # Because there is a large steady-state error, we need a precompensator to scale the reference signal
yref = 20*(np.sin(2*np.pi*t*0.1))
y, t = step(newSys * scaleSF, t)
plt.plot(t,y)
plt.title('State-Feedback Controller Response')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
###Output
_____no_output_____
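###Markdown
Two small added checks (not in the original notebook): the closed-loop eigenvalues should match the requested pole locations (-3, -4), and the hard-coded precompensator 1/0.02083333 is simply the inverse of the closed-loop DC gain, so it can be computed instead of typed in.
###Code
# Added sketch: verify the placed poles and recover the precompensator from the DC gain
print('Closed-loop poles:', np.linalg.eigvals(feedbackMech))
print('1 / DC gain      :', 1 / dcgain(newSys))
###Output
_____no_output_____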
###Markdown
LQR Controller LQR (Linear Quadratic Regulator) is one of the optimal control algorithm variants; it uses results from the calculus of variations, such as the Algebraic Riccati Equation (ARE), to determine the optimal gain for the controller.
###Code
# Defining the LQR Parameter
Q = np.array([[0,0],
[0,100]])
R = 10
gainLQR, X, E = lqr(sys_ss.A, sys_ss.B, Q, R)
feedbackMech = sys_ss.A - sys_ss.B * gainLQR
newSysqr = ss(feedbackMech, sys_ss.B, sys_ss.C, 0)
t = np.linspace(0,10)
scaleLQR = 1 / 0.07754505 # Because there is a large steady-state error, we need a precompensator to scale the reference signal
y,t = step(newSysqr * scaleLQR, t)
plt.plot(t,y)
plt.title('LQR Controller Response')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
###Output
_____no_output_____
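###Markdown
The weighting matrices Q and R trade control effort against speed of response. The added sketch below (weights chosen arbitrarily) varies R while keeping Q fixed; making the control "cheaper" (smaller R) typically produces larger gains and faster closed-loop poles.
###Code
# Added sketch: see how the LQR gain and closed-loop poles move as R changes
for R_try in [1, 10, 100]:
    K_try, _, E_try = lqr(sys_ss.A, sys_ss.B, Q, R_try)
    print('R =', R_try)
    print('  gain             :', np.round(K_try, 3))
    print('  closed-loop poles:', np.round(E_try, 3))
###Output
_____no_output_____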
###Markdown
Compare it Together Before we can use different reference signals such as a square wave, a sinusoidal signal, etc., we need to import the forced_response function from the Python Control Systems Library and the signal module from the SciPy library
###Code
from control import forced_response
from scipy import signal
###Output
_____no_output_____
###Markdown
Step Speed Reference of 1200 RPM
###Code
# The Step Signal with 1200 RPM amplitude
maxSim = 10 # The simulation time is 10 seconds
t = np.linspace(0, maxSim)
amp = 1200 # Because the reference signal is 1200 RPM
ref = amp * np.ones(np.shape(t))
isys # The transfer function
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('1200 RPM Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
from google.colab import files
plt.savefig('step.png', format='png', dpi=1200)
files.download('step.png')
###Output
_____no_output_____
###Markdown
Sinusoidal Speed Reference
###Code
# The Sinusoidal Signal with 1000 RPM amplitude
maxSim = 10 # The simulation time is 10 second
t = np.linspace(0, maxSim)
amp = 1000 # Because the reference signal amplitude is 1000 RPM
f = 0.1 # The sinusoidal signal frequency
ref = amp * np.sin(2 * np.pi * f * t)
isys # The transfer function
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('Sinusoidal Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
plt.savefig('sinusoidal.png', format='png', dpi=1200)
###Output
_____no_output_____
###Markdown
Square Speed Reference
###Code
# The Square Signal with 500 RPM amplitude
maxSim = 10 # The simulation time is 10 second
t = np.linspace(0, maxSim, endpoint = True)
amp = 500 # Because the reference signal is on 500 RPM
f = 0.1 # The square signal frequency in Hz
ref = amp * signal.square(2 * np.pi * f * t)
isys # The transfer function
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('Square Wave Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
plt.savefig('square.png', format='png', dpi=1200)
files.download('square.png')
###Output
_____no_output_____
###Markdown
Sawtooth Speed Reference
###Code
# The Sawtooth Signal with 800 RPM amplitude
maxSim = 20 # The simulation time is 20 second
t = np.linspace(0, maxSim)
amp = 800 # Because the reference signal is on 800 RPM
f = 0.2 # The sawtooth signal frequency
ref = amp * signal.sawtooth(2 * np.pi * f * t)
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('Sawtooth Signal Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
###Output
_____no_output_____
###Markdown
More Advanced Control Systems Design (Optional) Coming Soon on V2 Robust Control Design
###Code
###Output
_____no_output_____
###Markdown
Introduction to Control Systems Hello everyone, on this occasion I would like to share the notebook that I used to create the poster for this year's EuroPython.Don't forget to follow me on github then :) Install the required libraries first; if you have already installed them, skip to the next section.
###Code
!pip install control
!pip install slycot
!pip install scipy
!pip install numpy
!pip install matplotlib
###Output
Requirement already satisfied: control in /usr/local/lib/python3.6/dist-packages (0.8.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from control) (3.2.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from control) (1.18.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from control) (1.4.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (2.8.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->control) (0.10.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib->control) (1.12.0)
Requirement already satisfied: slycot in /usr/local/lib/python3.6/dist-packages (0.4.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from slycot) (1.18.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (1.4.1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from scipy) (1.18.5)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (1.18.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (3.2.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.2.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.4.7)
Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.18.5)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib) (1.12.0)
###Markdown
Caveat: * Sorry for the array mess; the python control systems library always prints that kind of array and I am still trying to figure out how to get rid of it.* In case the equations are broken or not displayed properly, try to visit this link https://colab.research.google.com/drive/1S-q44-BDln7r14vhqifOalrsseLBfyVh?usp=sharing to view the notebook directly in Google Colab. Tutorial on Control Systems Design with Python LTI (Linear Time-Invariant) systems are assumed here because nonlinear or other complex systems are difficult to design and need a more advanced understanding of the control systems field. Library Importing First of all, we need to import several essential libraries for designing the control systems, as listed below
###Code
import control # This is python control library (https://python-control.readthedocs.io/en/latest/intro.html)
import matplotlib.pyplot as plt
import numpy as np
import scipy
from control.matlab import * # To import matlab like function in designing control systems
###Output
_____no_output_____
###Markdown
Defining Transfer Function Let's assume we have an arbitrary transfer function equal toContinuous Transfer Function\begin{align}\frac{s}{s^2 + 2s + 6}\end{align}Discrete Transfer Function\begin{align}\frac{z}{z^2 + 2z + 6}\end{align} Import the Function from Python Control
###Code
from control import TransferFunction, pole, zero # Transfer Function function import
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Time Systems
s = TransferFunction.s
sysc = s / (s**2 + 2*s + 6)
# Discrete Time Systems
z = TransferFunction.z
sysd = z / (z**2 + 2*z + 6)
###Output
_____no_output_____
###Markdown
MATLAB like Function
###Code
# Continuous Time Systems
s = tf('s')
sysc = s/(s**2 + 2*s + 6)
# Discrete Time Systems
z = tf('z')
sysd = z / (z**2 + 2*z + 6)
###Output
_____no_output_____
###Markdown
Stability CheckIn order to get the specified output, the various parameters of the system must be controlled. Along with this, the system must be stable enough so that the output is not affected by undesirable variations in the system parameters or by disturbances.Thus we can say that a stable system is designed so as to get the desired response of the system without any intolerable variation with the changes in the system parameters.Source: https://electronicscoach.com/stability-of-control-system.htmlSource: https://flylib.com/books/en/2.729.1/the_z_transform.htmlA continuous system is called unstable if* The system poles are located in the right half plane of the s-plane A discrete system is called unstable if* The system poles are located outside the unit circle
###Code
# The function of python control libraries are the same as MATLAB
# From this we can analyze the systems stability by find out the poles
# Continuous Systems
print('Continuous Systems')
# The poles
pc = pole(sysc)
zc = zero(sysc)
print('Pole of The Systems'),
print(pc),
print()
# Discrete Systems
print('Discrete systems')
# The poles
pd = pole(sysd)
zd = zero(sysd)
print('Pole of The Systems'),
print(pd)
###Output
Continuous Systems
Pole of The Systems
[-1.+2.23606798j -1.-2.23606798j]
Discrete systems
Pole of The Systems
[-1.+2.23606798j -1.-2.23606798j]
###Markdown
Defining State Space Matrix of System Importing The Main Function
###Code
from control import StateSpace
###Output
_____no_output_____
###Markdown
Convert Transfer Function to State Space Form and Vice Versa
###Code
# In this case Python Control Function as same as MATLAB Like Function
from control import tf2ss
sysc = tf2ss(sysc)
sysd = tf2ss(sysd)
sysc
# Assume we have systems as below
A = np.array([[-2, -6], [1, 0]])
B = np.array([[-1], [0]])
C = np.array([[-1, 0]])
D = np.array([[0]])
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Systems
sysc = StateSpace(A,B,C,D)
sysc
# Discrete Systems
ts = 0.01 # The Sampling Time
sysd = StateSpace(A,B,C,D,ts)
sysd
###Output
_____no_output_____
###Markdown
MATLAB like Function
###Code
# Continuous Systems
sysc = ss(A,B,C,D)
sysc
# Discrete Systems
ts = 0.01 # The Sampling Time
sysd = ss(A,B,C,D,ts)
sysd
###Output
_____no_output_____
###Markdown
Stability CheckThe aim of the stability check in state space form is the same as in transfer function form: it is to make sure the system can reach the desired point or reference without any intolerable variation or change.The way to check it is also the same, but in this case, instead of using the poles, the stability is checked using the eigenvalues of the state matrix.
###Code
# Check the systems stability by viewing the eigenvalue
# Continuous Systems
eigs, eigvs = np.linalg.eig(sysc.A)
print('Continuous Systems Eigenvalues'),
print(eigs)
# Discrete Systems
eigd, eigvd = np.linalg.eig(sysd.A)
print('Discrete Systems Eigenvalues'),
print(eigd)
###Output
Continuous Systems Eigenvalues
[-1.+2.23606798j -1.-2.23606798j]
Discrete Systems Eigenvalues
[-1.+2.23606798j -1.-2.23606798j]
###Markdown
Controllability and Observability CheckThe intuition behind the controllability and observability checks:* Controllability: In order to be able to do whatever we want with the given dynamic system under control input, the system must be controllable.* Observability: In order to see what is going on inside the system under observation, the system must be observable.Source: https://www.ece.rutgers.edu/~gajic/psfiles/chap5.pdf
###Code
from control import obsv, ctrb
# In this case the function for control libraries and MATLAB are same
# Continuous Systems
# Controllability Check
cc = ctrb(sysc.A, sysc.B)
rankcc = np.linalg.matrix_rank(cc)
print('Continuous Systems', '\n')
print('The Controllability Matrix'),
print(cc),
print('Rank of Controllability Matrix'),
print(rankcc),
# Observability Check
oc = obsv(sysc.A, sysc.C)
rankoc = np.linalg.matrix_rank(oc)
print('The Observability Matrix'),
print(oc),
print('Rank of Observability Matrix'),
print(rankoc),
print()
# Discrete Systems
# Controllability Check
cd = ctrb(sysd.A, sysc.B)
rankcd = np.linalg.matrix_rank(cd)
print('Discrete Systems', '\n')
print('The Controllability Matrix'),
print(cd),
print('Rank of Controllability Matrix'),
print(rankcd),
# Observability Check
od = obsv(sysd.A, sysc.C)
rankod = np.linalg.matrix_rank(od)
print('The Observability Matrix'),
print(od),
print('Rank of Observability Matrix'),
print(rankod)
###Output
Continuous Systems
The Controllability Matrix
[[-1. 2.]
[ 0. -1.]]
Rank of Controllability Matrix
2
The Observability Matrix
[[-1. 0.]
[ 2. 6.]]
Rank of Observability Matrix
2
Discrete Systems
The Controllability Matrix
[[-1. 2.]
[ 0. -1.]]
Rank of Controllability Matrix
2
The Observability Matrix
[[-1. 0.]
[ 2. 6.]]
Rank of Observability Matrix
2
###Markdown
Analyze the Control Systems
###Code
from control import bode_plot, nyquist_plot, root_locus, pzmap
###Output
_____no_output_____
###Markdown
Pole Zero Map
###Code
# Continuous
pzmap(sysc)
# Discrete
pzmap(sysd)
###Output
_____no_output_____
###Markdown
Root Locus
###Code
# Continuous
root_locus(sysc)
# Discrete
root_locus(sysd)
###Output
_____no_output_____
###Markdown
Bode Plot
###Code
# continuous
bode_plot(sysc)
# Discrete
bode_plot(sysd)
###Output
_____no_output_____
###Markdown
Nyquist Plot
###Code
# Continuous
nyquist_plot(sysc)
# Discrete
nyquist_plot(sysd)
###Output
/usr/local/lib/python3.6/dist-packages/control/statesp.py:516: UserWarning: freqresp: frequency evaluation above Nyquist frequency
warn("freqresp: frequency evaluation above Nyquist frequency")
###Markdown
Test the System Response We can analyze the response of a system with two approaches: first the time-domain approach, then the frequency-domain approach
###Code
from control import step_response # Step Response function import
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Time Systems
tc, yc = step_response(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
td, yd = step_response(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
MATLAB like function
###Code
# Continuous Time Systems
yc, tc = step(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
yd, td = step(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Step Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
Impulse Response
###Code
from control import impulse_response # Impulse Response function import
###Output
_____no_output_____
###Markdown
Python Control Function
###Code
# Continuous Time Systems
tc, yc = impulse_response(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
td, yd = impulse_response(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
MATLAB like Function
###Code
# Continuous Time Systems
yc, tc = impulse(sysc)
plt.subplot(2,1,1)
plt.plot(tc,yc)
plt.title('Continuous Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
# Discrete Time Systems
yd, td = impulse(sysd)
plt.subplot(2,1,2)
plt.plot(td,yd)
plt.title('Discrete Impulse Response')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
Frequency Domain Approach Frequency Response
###Code
from control import freqresp
# Continuous
freqresp(sysc, [10])
# Discrete
freqresp(sysd, [12])
###Output
_____no_output_____
###Markdown
Okay Then, Let's Simulate The Real World Systems! On this occasion, the design of the control algorithm uses MATLAB-like functions, so the code is also compatible with MATLAB. In case you want to try the simulation there, it will also work in MATLAB, but you must revise it with a few slight changes to the syntax. DC Motor Speed Control Source: http://ctms.engin.umich.edu/CTMS/index.php?example=MotorSpeed&section=SystemModeling (CTMS, University of Michigan)**The control aim for this system is to drive the DC motor speed to the desired speed by regulating the voltage applied to the motor.**The system characteristics are continuous and linear in nature, because in this tutorial I limit the scope to linear systems.In this case the control goal is called tracking, since the goal of our control problem is to drive the DC motor speed to the desired speed.
###Code
J = 0.08; # The motor moment of inertia
b = 0.05;
K = 0.01;
R = 0.5;
L = 0.5;
s = tf('s');
sys_tf = K/((J*s+b)*(L*s+R)+K**2)
# Also change the form of transfer function into state space model for controlling through the modern control algorithm
sys_ss = tf2ss(sys_tf)
## stability Checking
pzmap(sys_ss)
# Stability Checking
eig = np.linalg.eig(sys_ss.A)
print('System Eigenvalue'),
print(eig[0], "\n"),
# Controllability Check
ctrM = ctrb(sys_ss.A, sys_ss.B)
rankCtr = np.linalg.matrix_rank(ctrM)
print('Controllability Rank'),
print(rankCtr, "\n"),
# Observabiity Check
obsM = obsv(sys_ss.A, sys_ss.C)
rankObs = np.linalg.matrix_rank(obsM)
print('Observability Rank'),
print(rankObs)
###Output
System Eigenvalue
[-0.9932104 -0.6317896]
Controllability Rank
2
Observability Rank
2
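###Markdown
A small added aside: because the two eigenvalues above are real and negative, they correspond to time constants tau = -1/eig, which give a feel for how quickly the open-loop motor responds before any controller is added.
###Code
# Added sketch: open-loop time constants from the eigenvalues computed above
taus = -1 / eig[0]
print('Open-loop time constants (s)        :', taus)
print('Dominant (slowest) time constant (s):', np.max(taus))
###Output
_____no_output_____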
###Markdown
It's stable because the poles (eigenvalues) shown above are located in the left half plane. The system is controllable and observable because both the controllability and observability matrices have full rank. PID Controller The equation of the PID controller is denoted below,\begin{align}u(t) = K_p*e(t) + K_d*\frac{de(t)}{dt} + K_i * \int{e(t)} \end{align}Where Kp, Ki, Kd, and e(t) are the proportional gain, integral gain, derivative gain, and system error, respectively.If we want to write it in transfer function form, then\begin{align}U(s)=K_p*E(s) + K_d * sE(s) + K_i * \frac{1}{s} * E(s) \end{align}If we assume\begin{align}\tau_i = \frac{K_p}{K_i} ;\tau_d = \frac{K_d}{K_p}\end{align}then the final equation for the controller is\begin{align}U(s) = K_p * E(s)(1 + s * \tau_d + \frac{1}{s*\tau_i})\end{align}In this test I select the gains arbitrarily as follows: Kp = 160, Ki = 4, and Kd = 80. Feel free to experiment with other gain combinations to see the effect of changing the gains.
###Code
Kp = 160; # The proportional gain
Ki = 4; # The integral gain
Kd = 80; # The derivative gain
Ti = Kp / Ki # The integral time constant
Td = Kd / Kp # The derivative time constant
pid = Kp * (1 + 1 / (Ti * s) + Td * s) # The PID controller transfer function C(s)
isys = feedback(sys_tf * pid, 1) # The closed-loop system with unity feedback
t = np.linspace(0, 10)
y, t = step(isys, t)
plt.plot(t, y)
plt.title('PID Controller Response')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
###Output
_____no_output_____
###Markdown
State Feedback Controller (Pole Placement Method) In this design, I place both closed-loop poles in the left half plane, at the specific locations (-3, -4). You can also try other pole combinations to test how the system response changes.
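With the full-state feedback law u = r - Kx, the closed-loop dynamics become \begin{align}\dot{x} = (A - BK)x + Br\end{align} which is exactly the feedbackMech = sys_ss.A - sys_ss.B * ppGain construction in the next cell; place() computes the gain K so that the eigenvalues of (A - BK) land on the desired poles.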
###Code
desiredPole = np.array([-3,-4])
ppGain = place(sys_ss.A, sys_ss.B, desiredPole)
feedbackMech = sys_ss.A - sys_ss.B * ppGain
newSys = ss(feedbackMech, sys_ss.B, sys_ss.C, 0)
t = np.linspace(0,10)
scaleSF = 1 / 0.02083333 # There is a large steady-state error, so we scale the reference with a precompensator gain
yref = 20*(np.sin(2*np.pi*t*0.1)) # Example sinusoidal reference (not used in this step response)
y, t = step(newSys * scaleSF, t)
plt.plot(t,y)
plt.title('State-Feedback Controller Response')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
###Output
_____no_output_____
###Markdown
LQR Controller LQR (Linear Quadratic Regulator) is one of the optimal control algorithms: it uses the calculus of variations, solving the Algebraic Riccati Equation (ARE), to determine the optimal state-feedback gain for the controller.
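The LQR gain minimizes the quadratic cost \begin{align}J = \int_{0}^{\infty} \left( x^T Q x + u^T R u \right) dt\end{align} where Q penalizes state deviations and R penalizes control effort, so the Q and R matrices chosen in the next cell are the tuning knobs of this design.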
###Code
# Defining the LQR Parameter
Q = np.array([[0,0],
[0,100]])
R = 10
gainLQR, X, E = lqr(sys_ss.A, sys_ss.B, Q, R)
feedbackMech = sys_ss.A - sys_ss.B * gainLQR
newSysqr = ss(feedbackMech, sys_ss.B, sys_ss.C, 0)
t = np.linspace(0,10)
scaleLQR = 1 / 0.07754505 # There is a large steady-state error, so we scale the reference with a precompensator gain
y,t = step(newSysqr * scaleLQR, t)
plt.plot(t,y)
plt.title('LQR Controller Response')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
###Output
_____no_output_____
###Markdown
Compare Them Together Before we can use different reference signals such as square waves, sinusoidal signals, etc., we first import the forced_response function from the Python Control Systems Library and the signal module from SciPy. In the version of the library used here, forced_response returns a tuple whose second element is the output signal, which is why the cells below index the result with [1].
###Code
from control import forced_response
from scipy import signal
###Output
_____no_output_____
###Markdown
Step Speed Reference of 1200 RPM
###Code
# The Step Signal with 1200 RPM amplitude
maxSim = 10 # The simulation time is 10 seconds
t = np.linspace(0, maxSim)
amp = 1200 # Because the reference signal is at 1200 RPM
ref = amp * np.ones(np.shape(t))
isys # The closed-loop PID system defined earlier
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('1200 RPM Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
###Output
_____no_output_____
###Markdown
Sinusoidal Speed Reference
###Code
# The Sinusoidal Signal with 1000 RPM amplitude
maxSim = 10 # The simulation time is 10 seconds
t = np.linspace(0, maxSim)
amp = 1000 # Because the reference signal amplitude is 1000 RPM
f = 0.1 # The sinusoidal signal frequency in Hz
ref = amp * np.sin(2 * np.pi * f * t)
isys # The closed-loop PID system defined earlier
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('Sinusoidal Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
###Output
_____no_output_____
###Markdown
Square Speed Reference
###Code
# The Square Signal with 500 RPM amplitude
maxSim = 10 # The simulation time is 10 seconds
t = np.linspace(0, maxSim, endpoint = True)
amp = 500 # Because the reference signal is at 500 RPM
f = 0.1 # The square signal frequency in Hz
ref = amp * signal.square(2 * np.pi * f * t)
isys # The closed-loop PID system defined earlier
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('Square Wave Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
###Output
_____no_output_____
###Markdown
Sawtooth Speed Reference
###Code
# The Sawtooth Signal with 800 RPM amplitude
maxSim = 20 # The simulation time is 20 seconds
t = np.linspace(0, maxSim)
amp = 800 # Because the reference signal is at 800 RPM
f = 0.2 # The sawtooth signal frequency in Hz
ref = amp * signal.sawtooth(2 * np.pi * f * t)
y1 = forced_response(isys, T = t, U = ref)
res1 = y1[1]
y2 = forced_response(newSys * scaleSF, T = t, U = ref)
res2 = y2[1]
y3 = forced_response(newSysqr * scaleLQR, T = t, U = ref)
res3 = y3[1]
plt.plot(t,ref)
plt.plot(t,res1)
plt.plot(t,res2)
plt.plot(t,res3)
plt.title('Sawtooth Signal Speed Reference')
plt.xlabel('Time (s)')
plt.ylabel('Motor Speed (RPM)')
plt.legend(['Reference','PID', 'State Feedback','LQR'])
###Output
_____no_output_____
###Markdown
More Advanced Control Systems Design (Optional) Coming Soon on V2 Robust Control Design
###Code
###Output
_____no_output_____
|
social_isolation_pyContextNLP.ipynb
|
###Markdown
6950
###Code
# !pip install PyRuSH
# !pip install pyConTextNLP
# !pip install textblob
# !pip install radnlp
###Output
_____no_output_____
###Markdown
Rules
###Code
sentence_rules='KB/rush_rules.tsv'
#target_rules='KB/social_kb.yml'
#context_rules='KB/general_modifiers.yml'
#feature_inference_rule='KB/featurer_inferences.csv'
#document_inference_rule='KB/doc_inferences.csv'
with open('KB/social_target_rules_042118.yml','r') as f: # 'KB/social_target_rules_040618.yml'
target_rules = f.read()
with open('KB/social_modifiers_2018.yml','r') as f: # KB/lexical_kb_05042016.yml , KB/general_modifiers_2018.yml
context_rules = f.read()
with open('KB/featurer_inferences.csv','r') as f:
feature_inference_rule = f.read()
with open('KB/doc_inferences.csv','r') as f:
document_inference_rule = f.read()
###Output
_____no_output_____
###Markdown
NLP pipeline
###Code
from pynlp_pipe import Mypipe
myPipe=Mypipe(sentence_rules, target_rules, context_rules, feature_inference_rule, document_inference_rule)
###Output
_____no_output_____
###Markdown
Read filenames
###Code
import os
#path ="test/2017/combine" # 2017 96+439
path = "test/500_1/corpus" # 500 notes
#path = "test/156_156/soc" # combine positive
files = os.listdir(path)
len(files)
###Output
_____no_output_____
###Markdown
Read documents and apply pipeline
###Code
import chardet
results=dict() # dictionary will contain document names as keys and a document-level classification as values.
context_doc_res=[]
# Read txt files
doc_texts = []
#note_count = 0 # count the number of text notes want to process ***
for i in files[:]:
if ".txt" in i:
#note_count = note_count + 1 #
#if note_count > 20: # count the number of text notes want to process ***
# break #
with open(os.path.join(path,i), 'rb') as f:
doc_txt = chardet.detect(f.read())
#print(i)
#print(result["encoding"])
with open(os.path.join(path,i),encoding=doc_txt["encoding"]) as f:
doc_text = f.read() # or readline if the file is large
doc_text=doc_text.replace('\n', ' ')
doc_class, context_doc, annotations, relations = myPipe.process(doc_text)
if len(annotations) != 0 and doc_class == 'no_isolation':
print (i)
results[i] = doc_class
context_doc_res.append(context_doc)
###Output
_____no_output_____
###Markdown
Print classification results
###Code
res_num=0
for j in results:
print(j, " : ", results[j])
res_num=res_num+1
if res_num > 5:
break
###Output
09696212_199472695_469775619.txt : isolation_doc
09910266_203543374_585135519.txt : isolation_doc
09914912_202033473_535253049.txt : isolation_doc
10882066_216071107_853913484.txt : no_isolation
10882066_216071107_859257422.txt : no_isolation
10882066_216071107_863455238.txt : no_isolation
###Markdown
Validation
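Before running the project's Validnote helper, it may help to spell out what the document-level metrics reduce to. The cell below is only an illustrative sketch (it is not the actual Validnote implementation): it assumes the predictions and the gold standard are both dictionaries mapping a file name to its label, and counts true/false positives and false negatives for the positive label.
###Code
# Illustrative sketch only -- the real metrics below come from pynlp_valid.Validnote.
def simple_prf(predictions, gold, pos_label):
    tp = sum(1 for doc, lab in predictions.items() if lab == pos_label and gold.get(doc) == pos_label)
    fp = sum(1 for doc, lab in predictions.items() if lab == pos_label and gold.get(doc) != pos_label)
    fn = sum(1 for doc, lab in gold.items() if lab == pos_label and predictions.get(doc) != pos_label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
###Output
_____no_output_____
###Markdown
The actual validation below reads the positive and negative gold-standard folders and scores the results with the Validnote class shipped with this project.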
###Code
posPath ="test/500_1/corpusp" # 500 notes
#posPath ="test/2017/isolation" # 2017 96+439
#posPath ="test/156_156/soc_pos" # combine positive
negPath = "test/500_1/corpusn" # 500 notes
#negPath = "test/2017/noisolation" # 2017 96+439
#negPath ="test/156_156/soc_neg" # combine positive
posLab = "isolation_doc"
negLab = "no_isolation"
from pynlp_valid import Validnote
validnote = Validnote()
std_doc = validnote.readstd(posPath, negPath, posLab, negLab)
precision, recall, f1 = validnote.validation(results, std_doc, posLab, negLab)
print("*"*20)
print("Precision: ", precision)
print("Recall: ", recall)
print("F1: ", f1)
with open(os.path.join(path,"14498364_203097610_559618901.txt")) as f:
doc_text = f.read()
doc_text=doc_text.replace('\n', ' ')
doc_class, context_doc, annotations, relations = myPipe.process(doc_text)
annotations
relations
###Output
_____no_output_____
|
PyCitySchools/PyCitySchoolsScript.ipynb
|
###Markdown
__PyCity Schools__ PyCity Schools AnalysisAfter analyzing the math and reading scores of 15 schools within the PyCity School District, some general conclusions can be drawn from the following summaries. Observable Trends* School budgets and spending per student do not seem to have an impact on students' success, but school size does. Comparing the top and worst performing schools, the top schools spent an average of \\$606.40 per student while the worst schools actually spent more, with an average of \\$646.60 per student. School size, however, does appear to have an influence: the top schools have an average of 1,641 students and the worst schools have an average of 3,852 students.* The school type also appears to have an effect on the success of the students, but this may be mainly because of the school size trends of the Charter and District schools. The Charter schools have an average of 1,524 students and the District schools have an average of 3,852 students. Further information would be required to analyze whether Charter schools are actually better than District schools. Future Study* More information, i.e. other subject grades, classroom sizes, budget allocations, faculty demographics, and extracurricular activities, would be interesting to analyze in order to view trends between the Charter and District schools.
###Code
#import python libraries
import pandas as pd
from tabulate import tabulate
#specifying data
schooldata_file = "Resources/schools_complete.csv"
studentdata_file = "Resources/students_complete.csv"
#reading files into pandas
school_data = pd.read_csv(schooldata_file)
student_data = pd.read_csv(studentdata_file)
#combining data
pycity_data = pd.merge(student_data, school_data, how="left", on=["school_name","school_name"])
###Output
_____no_output_____
###Markdown
_District Summary_
###Code
#calculate total number of schools, numbers, and budget
school_count = len(pycity_data["School ID"].unique())
student_count = len(pycity_data["Student ID"].unique())
district_budget = school_data["budget"].sum()
#calculate average math and reading score
averagedistrict_math = pycity_data["math_score"].mean()
averagedistrict_read = pycity_data["reading_score"].mean()
#calculate percentage of students passing math scores, passing reading scores, and both
passingmath = (pycity_data["math_score"] >= 70).sum()/student_count
passingreading = (pycity_data["reading_score"] >= 70).sum()/student_count
passingoverall = ((pycity_data["math_score"] >= 70) & (pycity_data["reading_score"] >= 70)).sum() / student_count
#create district summary table
district_summary = pd.DataFrame([
{"Total Schools": school_count,
"Total Students": student_count,
"Total Budget": district_budget,
"Average Math Score": averagedistrict_math,
"Average Reading Score": averagedistrict_read,
"% Passing Math": passingmath,
"% Passing Reading": passingreading,
"% Overall Passing": passingoverall}])
#format district summary table
district_dict = {'Total Students': '{0:,.0f}', 'Total Budget':'${0:,.2f}',
'% Passing Math': '{:%}', '% Passing Reading': '{:%}', '% Overall Passing': '{:%}'}
district_summary.style.format(district_dict).set_properties(**{'text-align':'left'}).hide_index()
###Output
_____no_output_____
###Markdown
_School Summary_
###Code
#create dataframe for school summary analysis
school_summary = pycity_data.groupby('school_name').agg({
'type':['max'],
'Student ID':['count'],
'budget':['max'],
'math_score':['mean'],
'reading_score':['mean']})
#rename column titles and create new datasets
school_summary.columns = ["School Type", "Total Students", "Total School Budget", "Average Math Score", "Average Reading Score"]
#calculate the school budget per student column
school_summary["Per Student Budget"] = school_summary["Total School Budget"]/school_summary["Total Students"]
#pre-calculation for percentage of students passing math scores, passing reading scores, and both by school
passingmath_school = pycity_data.loc[pycity_data['math_score'] >= 70].groupby('school_name').count()['size']
passingread_school = pycity_data.loc[pycity_data['reading_score'] >= 70].groupby('school_name').count()['size']
passingoverall_school = pycity_data.loc[
(pycity_data['math_score'] >= 70)
&(pycity_data['reading_score'] >= 70)
].groupby('school_name').count()['size']
#calculate for percentage of students passing math scores, passing reading scores, and both by school
school_summary["% Passing Math"] = passingmath_school / school_summary["Total Students"]*100
school_summary["% Passing Reading"] = passingread_school / school_summary["Total Students"]*100
school_summary["% Overall Passing"] = passingoverall_school / school_summary["Total Students"]*100
#rearrange columns
school_summary = school_summary[['School Type', 'Total Students', 'Total School Budget','Per Student Budget',
'Average Math Score', 'Average Reading Score',
'% Passing Math', '% Passing Reading', '% Overall Passing']]
#rename index column
school_summary.index.name= "School Name"
#format school summary table
school_dict = {'Total Students': '{0:,.0f}', 'Total School Budget':'${0:,.2f}',
'Per Student Budget':'${0:,.2f}'}
school_summary.style.format(school_dict).set_properties(**{'text-align':'right'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
#print(tabulate(school_summary,showindex=False, headers=school_summary.columns))
###Output
_____no_output_____
###Markdown
_Top Performing Schools (By % Overall Passing)_
###Code
#find and sort the top five performing schools by % overall passing
topschools = school_summary.sort_values(by='% Overall Passing', ascending=False).head(5)
#format top school summary table
topschool_dict = {'Total Students': '{0:,.0f}', 'Total School Budget':'${0:,.2f}',
'Per Student Budget':'${0:,.2f}'}
topschools.style.format(topschool_dict).set_properties(**{'text-align':'right'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
###Output
_____no_output_____
###Markdown
_Bottom Performing Schools (By % Overall Passing)_
###Code
#find and sort the top five worst performing schools by % overall passing
worstschools = school_summary.sort_values(by=['% Overall Passing']).head(5)
#format worst school summary table
worstschool_dict = {'Total Students': '{0:,.0f}', 'Total School Budget':'${0:,.2f}',
'Per Student Budget':'${0:,.2f}'}
worstschools.style.format(worstschool_dict).set_properties(**{'text-align':'right'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
###Output
_____no_output_____
###Markdown
_Math Scores by Grade_
###Code
#filter data by school for an average math score by grade and school
mathgrade_nine = pycity_data[pycity_data['grade'] == "9th"].groupby('school_name').mean()['math_score']
mathgrade_ten = pycity_data[pycity_data['grade'] == "10th"].groupby('school_name').mean()['math_score']
mathgrade_eleven = pycity_data[pycity_data['grade'] == "11th"].groupby('school_name').mean()['math_score']
mathgrade_twelve = pycity_data[pycity_data['grade'] == "12th"].groupby('school_name').mean()['math_score']
mathgrade_total = pycity_data.groupby('school_name').mean()['math_score']
#create math scores table with columns titles and place filtered data
mathscores_grade = pd.DataFrame({"9th":mathgrade_nine,"10th":mathgrade_ten,"11th":mathgrade_eleven,"12th":mathgrade_twelve,"Overall":mathgrade_total})
#rename index column and print math scores by grade table
mathscores_grade.index.name= "School Name"
mathscores_grade.style.set_properties(**{'text-align':'left'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
###Output
_____no_output_____
###Markdown
_Reading Scores by Grade_
###Code
#filter data by school for an average reading score by grade and school
readgrade_nine = pycity_data[pycity_data['grade'] == "9th"].groupby('school_name').mean()['reading_score']
readgrade_ten = pycity_data[pycity_data['grade'] == "10th"].groupby('school_name').mean()['reading_score']
readgrade_eleven = pycity_data[pycity_data['grade'] == "11th"].groupby('school_name').mean()['reading_score']
readgrade_twelve = pycity_data[pycity_data['grade'] == "12th"].groupby('school_name').mean()['reading_score']
readgrade_total = pycity_data.groupby('school_name').mean()['reading_score']
#create reading scores table with columns titles and place filtered data
readscores_grade = pd.DataFrame({"9th":readgrade_nine,"10th":readgrade_ten,"11th":readgrade_eleven,"12th":readgrade_twelve,"Overall":readgrade_total})
#rename index column and print reading scores by grade table
readscores_grade.index.name= "School Name"
readscores_grade.style.set_properties(**{'text-align':'left'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
###Output
_____no_output_____
###Markdown
_Scores by School Spending_
###Code
#create custom ranges and labels for groupby based on school spending per student
studentspending = [0,584,629,644,675]
spendinglabels = ["<$584","$585-629","$630-644","$645-675"]
#pull per student budget data from summary table and place into ranges
schoolscores = school_summary["Spending Ranges (Per Student)"]=pd.cut(school_summary["Per Student Budget"], studentspending,labels=spendinglabels)
#calculate and place average math and reading scores, percentage of students passing math and reading, and percentage of students passing both math and reading
schoolscores_spending = school_summary.groupby("Spending Ranges (Per Student)").mean()
#configure table with relevant information
del schoolscores_spending['Total Students']
del schoolscores_spending['Total School Budget']
del schoolscores_spending['Per Student Budget']
#format scores by school spending summary table
schoolscorespending_dict = {'Average Math Score':'{0:,.2f}',
'Average Reading Score':'{0:,.2f}',
'% Passing Math':'{0:,.2f}',
'% Passing Reading':'{0:,.2f}',
'% Overall Passing':'{0:,.2f}'}
schoolscores_spending.style.format(schoolscorespending_dict).set_properties(**{'text-align':'right'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
###Output
_____no_output_____
###Markdown
_Scores by School Size_
###Code
#create custom ranges and labels for groupby based on school size
schoolstudentsize = [0,1000,2000,5000]
sizelabels = ["Small (<1000)","Medium (1000-2000)","Large (2000-5000)"]
#pull school size data from summary table and place into ranges
schoolsize = school_summary["School Size"]=pd.cut(school_summary["Total Students"], schoolstudentsize,labels=sizelabels)
#calculate and place average math and reading scores, percentage of students passing math and reading, and percentage of students passing both math and reading
schoolscores_size = school_summary.groupby("School Size").mean()
#configure and print table with relevant information
del schoolscores_size['Total Students']
del schoolscores_size['Total School Budget']
del schoolscores_size['Per Student Budget']
schoolscores_size.style.set_properties(**{'text-align':'right'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
###Output
_____no_output_____
###Markdown
_Scores by School Type_
###Code
#calculate and place data based on school type
schoolscores_type = school_summary.groupby("School Type").mean()
#configure and print table with relevant information
del schoolscores_type['Total Students']
del schoolscores_type['Total School Budget']
del schoolscores_type['Per Student Budget']
schoolscores_type.style.set_properties(**{'text-align':'right'}).set_table_styles([
{'selector': 'th','props':[('text-align','left')]},
{'selector': '.col_heading','props':[('text-align','left')]},
{'selector': '.row_heading','props':[('text-align','left')]}])
###Output
_____no_output_____
|
weather-trends.ipynb
|
###Markdown
You do not have to import any third-party modules like NumPy, pandas, and others. Just import the `WeatherTrends` object from my module, which is named web. By using this class, you can make plots of not one, not two, but any of the cities defined in the city list CSV file! To use this module, import my tool like this:
###Code
from web import WeatherTrends
###Output
_____no_output_____
###Markdown
Create an object with any name (e.g. tool):
###Code
# note: this object does not take any arguments for its initialization
tool = WeatherTrends()
###Output
_____no_output_____
###Markdown
To make a plot, call the `make_plot` method and pass any city name as the function argument:
###Code
# note: the city name must be a string
tool.make_plot('Moscow')
###Output
_____no_output_____
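###Markdown
Since `make_plot` only needs a city name, plotting several cities is just a loop over the names you care about. The city names below are only examples; they must exist in the city list CSV file for the plots to be produced:
###Code
# example: plot weather trends for several cities in one go
for city_name in ['Moscow', 'London', 'Berlin']:
    tool.make_plot(city_name)
###Output
_____no_output_____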
|
vlsi/adder.ipynb
|
###Markdown
DescriptionThis is written by Zhiyang Ong to obtain results for 1-bit, 4-bit, and 8-bit addition of unsigned numbers. The MIT License (MIT)Copyright (c) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.Email address: echo "cukj -wb- 23wU4X5M589 TROJANS cqkH wiuz2y 0f Mw Stanford" | awk '{ sub("23wU4X5M589","F.d_c_b. ") sub("Stanford","d0mA1n"); print $5, $2, $8; for (i=1; i<=1; i++) print "6\b"; print $9, $7, $6 }' | sed y/kqcbuHwM62z/gnotrzadqmC/ | tr 'q' ' ' | tr -d [:cntrl:] | tr -d 'ir' | tr y "\n" Che cosa significa?
###Code
"""
Email address: echo "cukj -wb- 23wU4X5M589 TROJANS cqkH wiuz2y 0f Mw Stanford" | awk '{ sub("23wU4X5M589","F.d_c_b. ") sub("Stanford","d0mA1n"); print $5, $2, $8; for (i=1; i<=1; i++) print "6\b"; print $9, $7, $6 }' | sed y/kqcbuHwM62z/gnotrzadqmC/ | tr 'q' ' ' | tr -d [:cntrl:] | tr -d 'ir' | tr y "\n" Che cosa significa?
"""
def full_adder(a,b, cin):
"""
References:
https://en.wikipedia.org/wiki/Adder_(electronics)#Full_adder
https://www.cs.uic.edu/~i266/hwk6/42.pdf
https://www.ece.uvic.ca/~fayez/courses/ceng465/lab_465/project1/adders.pdf
https://www.elprocus.com/half-adder-and-full-adder/
http://hyperphysics.phy-astr.gsu.edu/hbase/Electronic/fulladd.html
http://www.theorycircuit.com/full-adder-circuit-diagram/
    https://www.geeksforgeeks.org/full-adder-digital-electronics/
\cite[\S11.2.1, pp. 430-434]{Weste2011}
"""
#print("= Executing full adder.")
# sum = a xor b xor cin
sum = a ^ b ^ cin
# cout = (a & b) | (cin & (a ^ b))
cout = (a & b) | (cin & (a ^ b))
return sum, cout
def test_full_adder():
print("= Test full adder.")
a, b, c = 0, 0, 0
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
a, b, c = 0, 0, 1
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
a, b, c = 0, 1, 0
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
a, b, c = 0, 1, 1
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
a, b, c = 1, 0, 0
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
a, b, c = 1, 0, 1
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
a, b, c = 1, 1, 0
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
a, b, c = 1, 1, 1
(s, co) = full_adder(a,b,c)
print("a:",a," b:", b," c:", c," s:", s," co:", co)
def four_bit_adder(a_bitvec,b_bitvec,cin):
"""
References:
\cite[\S11.2.1, pp. 434-441; Figure 11.14, pp. 439]{Weste2011}
"""
#print("= Executing 4-bit adder.")
# 1st full adder component.
(s1,co1) = full_adder(a_bitvec[0], b_bitvec[0], cin)
# 2nd full adder component.
(s2,co2) = full_adder(a_bitvec[1], b_bitvec[1], co1)
# 3rd full adder component.
(s3,co3) = full_adder(a_bitvec[2], b_bitvec[2], co2)
# 4th full adder component.
(s4,co4) = full_adder(a_bitvec[3], b_bitvec[3], co3)
# Concatenate the sum output signals/bits into a tuple.
sum_bitvec = (s1,s2,s3,s4)
# Concatenate the "carry out" output signals/bits into a tuple.
cout_bitvec = (co1, co2, co3, co4)
return sum_bitvec, cout_bitvec
def get_p_and_g(a,b):
g = a & b
p = a ^ b
return p, g
def get_cout_g_grp(g, p, cin):
# cout = g_grp
g_grp = g | (p & cin)
return g_grp
def four_bit_adder_using_pg(a_bitvec,b_bitvec,cin):
"""
WARNING!!!
    This is buggy. Do NOT use it: the propagate and generate signals are passed to
    get_cout_g_grp() in swapped order (the function expects (g, p, cin) but receives
    (p, g, cin)), so each carry out is computed as p | (g & cin) instead of g | (p & cin).
References:
\cite[\S11.2.2, pp. 434-441/458]{Weste2011}
"""
# Get P and G for bit 1.
(p1, g1) = get_p_and_g(a_bitvec[0],b_bitvec[0])
g0 = cin
# Get sum and carry out for bit 1.
s1 = p1 ^ g0
g_grp0 = get_cout_g_grp(p1, g1, g0)
co1 = g_grp0
# - - - - - - - - - - - - - - - - - - - - - - -
# Get P and G for bit 2.
(p2, g2) = get_p_and_g(a_bitvec[1],b_bitvec[1])
# Get sum and carry out for bit 2.
s2 = p2 ^ co1
g_grp1 = get_cout_g_grp(p2, g2, co1)
co2 = g_grp1
# - - - - - - - - - - - - - - - - - - - - - - -
# Get P and G for bit 3.
(p3, g3) = get_p_and_g(a_bitvec[2],b_bitvec[2])
# Get sum and carry out for bit 3.
s3 = p3 ^ co2
g_grp2 = get_cout_g_grp(p3, g3, co2)
co3 = g_grp2
# - - - - - - - - - - - - - - - - - - - - - - -
# Get P and G for bit 4.
(p4, g4) = get_p_and_g(a_bitvec[3],b_bitvec[3])
# Get sum and carry out for bit 4.
s4 = p4 ^ co3
g_grp3 = get_cout_g_grp(p4, g4, co3)
co4 = g_grp3
# - - - - - - - - - - - - - - - - - - - - - - -
# Concatenate the sum output signals/bits into a tuple.
sum_bitvec = (s1,s2,s3,s4)
# Concatenate the "carry out" output signals/bits into a tuple.
cout_bitvec = (co1, co2, co3, co4)
return sum_bitvec, cout_bitvec
def test_4_bit_adder():
print("= Test 4-bit adder.")
"""
1111 + 1111 + 0 => 11110 (15 + 15 = 30), no carry in. F + F + 0 => 1110/E + 1
    1010 + 1010 + 1 => 10101 (10 + 10 + 1 = 21), carry in. A + A + 1 => 0101/5 + 1
    0101 + 0101 + 1 => 01011 (5 + 5 + 1 = 11), carry in. 5 + 5 + 1 => 1011/B + 0
Reference:
http://dev.code.ultimater.net/electronics/4-bit-full-adder/
https://www.calculator.net/binary-calculator.html
"""
print("= Test set of input vectors #1. 1110/E + 1")
a_bv = (1, 1, 1, 1)
b_bv = (1, 1, 1, 1)
c = 0
(s_bv, co_bv) = four_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
print("= Test set of input vectors #2. 0101/5 + 1")
a_bv = (0, 1, 0, 1)
b_bv = (0, 1, 0, 1)
c = 1
(s_bv, co_bv) = four_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
print("= Test set of input vectors #3. 1011/B + 0")
a_bv = (1, 0, 1, 0)
b_bv = (1, 0, 1, 0)
c = 1
(s_bv, co_bv) = four_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
def test_4_bit_adder_all_designs():
print("= Test 4-bit adder.")
"""
1111 + 1111 + 0 => 11110 (15 + 15 = 30), no carry in. F + F + 0 => 1110/E + 1
    1010 + 1010 + 1 => 10101 (10 + 10 + 1 = 21), carry in. A + A + 1 => 0101/5 + 1
    0101 + 0101 + 1 => 01011 (5 + 5 + 1 = 11), carry in. 5 + 5 + 1 => 1011/B + 0
Reference:
http://dev.code.ultimater.net/electronics/4-bit-full-adder/
https://www.calculator.net/binary-calculator.html
"""
# print("= Test set of input vectors #1.")
a_bv = (1, 1, 1, 1)
b_bv = (1, 1, 1, 1)
c = 0
(s_bv, co_bv) = four_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
#"""
(s_bv, co_bv) = four_bit_adder_using_pg(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
#"""
# print("= Test set of input vectors #2.")
a_bv = (1, 0, 1, 0)
b_bv = (1, 0, 1, 0)
c = 1
(s_bv, co_bv) = four_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
#"""
(s_bv, co_bv) = four_bit_adder_using_pg(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
#"""
# print("= Test set of input vectors #3.")
a_bv = (0, 1, 0, 1)
b_bv = (0, 1, 0, 1)
c = 1
(s_bv, co_bv) = four_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
#"""
(s_bv, co_bv) = four_bit_adder_using_pg(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
#"""
def eight_bit_adder(a_bitvec,b_bitvec,cin):
"""
Test for the following test vectors/patterns.
0111 1110 + 1110 0111 + 0 (Carry In) = 7E + E7 + 0 = 0110 0101 (65) + 1
1111 1111 + 0000 0000 + 1 (Carry In) = FF + 00 + 1 = 0000 0000 (00) + 1
1010 1010 + 0101 0101 + 0 (Carry In) = AA + 55 + 0 = 1111 1111 (FF) + 0
1010 1010 + 0101 0101 + 1 (Carry In) = AA + 55 + 1 = 0000 0000 (00) + 1
1100 1100 + 0011 0011 + 0 (Carry In) = CC + 33 + 0 = 1111 1111 (FF) + 0
1100 1100 + 0011 0011 + 1 (Carry In) = CC + 33 + 1 = 0000 0000 (00) + 1
References:
\cite[\S11.2.1, pp. 434-441; Figure 11.14, pp. 439]{Weste2011}
http://dev.code.ultimater.net/electronics/8-bit-full-adder-and-subtractor/
"""
#print("= Executing 8-bit adder.")
# 1st full adder component.
(s1,co1) = full_adder(a_bitvec[0], b_bitvec[0], cin)
# 2nd full adder component.
(s2,co2) = full_adder(a_bitvec[1], b_bitvec[1], co1)
# 3rd full adder component.
(s3,co3) = full_adder(a_bitvec[2], b_bitvec[2], co2)
# 4th full adder component.
(s4,co4) = full_adder(a_bitvec[3], b_bitvec[3], co3)
# 5th full adder component.
(s5,co5) = full_adder(a_bitvec[4], b_bitvec[4], co4)
# 6th full adder component.
(s6,co6) = full_adder(a_bitvec[5], b_bitvec[5], co5)
# 7th full adder component.
(s7,co7) = full_adder(a_bitvec[6], b_bitvec[6], co6)
# 8th full adder component.
(s8,co8) = full_adder(a_bitvec[7], b_bitvec[7], co7)
# Concatenate the sum output signals/bits into a tuple.
sum_bitvec = (s1,s2,s3,s4,s5,s6,s7,s8)
# Concatenate the "carry out" output signals/bits into a tuple.
cout_bitvec = (co1, co2, co3, co4, co5, co6, co7, co8)
return sum_bitvec, cout_bitvec
def test_8_bit_adder():
print("= Test 8-bit adder.")
"""
Test for the following test vectors/patterns.
0111 1110 + 1110 0111 + 0 (Carry In) = 7E + E7 + 0 = 0110 0101 (65) + 1
1111 1111 + 0000 0000 + 1 (Carry In) = FF + 00 + 1 = 0000 0000 (00) + 1
1010 1010 + 0101 0101 + 0 (Carry In) = AA + 55 + 0 = 1111 1111 (FF) + 0
1010 1010 + 0101 0101 + 1 (Carry In) = AA + 55 + 1 = 0000 0000 (00) + 1
1100 1100 + 0011 0011 + 0 (Carry In) = CC + 33 + 0 = 1111 1111 (FF) + 0
1100 1100 + 0011 0011 + 1 (Carry In) = CC + 33 + 1 = 0000 0000 (00) + 1
References:
\cite[\S11.2.1, pp. 434-441; Figure 11.14, pp. 439]{Weste2011}
http://dev.code.ultimater.net/electronics/8-bit-full-adder-and-subtractor/
"""
print("= Test set of input vectors #1. 0110 0101 (65) + 1")
a_bv = (0, 1, 1, 1, 1, 1, 1, 0)
b_bv = (1, 1, 1, 0, 0, 1, 1, 1)
c = 0
(s_bv, co_bv) = eight_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
print("= Test set of input vectors #2. 0000 0000 (00) + 1")
a_bv = (1, 1, 1, 1, 1, 1, 1, 1)
b_bv = (0, 0, 0, 0, 0, 0, 0, 0)
c = 1
(s_bv, co_bv) = eight_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
print("= Test set of input vectors #3. 1111 1111 (FF) + 0")
a_bv = (0, 1, 0, 1, 0, 1, 0, 1)
b_bv = (1, 0, 1, 0, 1, 0, 1, 0)
c = 0
(s_bv, co_bv) = eight_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
print("= Test set of input vectors #4. 0000 0000 (00) + 1")
a_bv = (0, 1, 0, 1, 0, 1, 0, 1)
b_bv = (1, 0, 1, 0, 1, 0, 1, 0)
c = 1
(s_bv, co_bv) = eight_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
print("= Test set of input vectors #5. 1111 1111 (FF) + 0")
a_bv = (0, 0, 1, 1, 0, 0, 1, 1)
b_bv = (1, 1, 0, 0, 1, 1, 0, 0)
c = 0
(s_bv, co_bv) = eight_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
print("= Test set of input vectors #6. 0000 0000 (00) + 1")
a_bv = (0, 0, 1, 1, 0, 0, 1, 1)
b_bv = (1, 1, 0, 0, 1, 1, 0, 0)
c = 1
(s_bv, co_bv) = eight_bit_adder(a_bv,b_bv,c)
print("a_bv:",a_bv," b:", b_bv," c:", c," s_bv:", s_bv," co_bv:", co_bv)
def d_flip_flop(d, clk,q=None):
# Temporary incorrect assignment for q_bar.
q_bar = None
if q is None:
q_bar = None
"""
References:
https://www.electronics-tutorials.ws/sequential/seq_4.html
https://circuitdigest.com/electronic-circuits/d-flip-flops
https://electronicsforu.com/resources/learn-electronics/flip-flop-rs-jk-t-d
http://electronics-course.com/d-flip-flop
https://ecetutorials.com/digital-electronics/d-flip-flop-circuit-operation-and-truth-table/
https://en.wikipedia.org/wiki/Flip-flop_(electronics)
http://www.rfcafe.com/references/electrical/flip-flop-table.htm
https://ece.uwaterloo.ca/~cgebotys/NEW/223-8notes.htm
https://www.sciencedirect.com/topics/computer-science/flip-flops
https://www.cypress.com/file/133031/download
http://www.cburch.com/logisim/docs/2.3.0/libs/mem/flipflops.html
"""
if (1 == clk):
q = d
if (1 == q):
q_bar = 0
else:
q_bar = 1
return q, q_bar
elif ((clk is not None) and (q is not None) and (q_bar is not None)):
return None, None
else:
if (1 == q):
q_bar = 0
else:
q_bar = 1
return q, q_bar
def test_d_flip_flop():
print("= Test D flip-flop.")
print("= Test set of input vectors #1.")
clk_ip = 0
d_ip = 0
(q_out, q_bar_out) = d_flip_flop(d_ip,clk_ip)
print("d_ip:",d_ip," clk_ip:", clk_ip," q_out:", q_out," q_bar_out:", q_bar_out)
print("= Test set of input vectors #2.")
clk_ip = 0
d_ip = 1
(q_out, q_bar_out) = d_flip_flop(d_ip,clk_ip)
print("d_ip:",d_ip," clk_ip:", clk_ip," q_out:", q_out," q_bar_out:", q_bar_out)
print("= Test set of input vectors #3.")
clk_ip = 1
d_ip = 0
(q_out, q_bar_out) = d_flip_flop(d_ip,clk_ip)
print("d_ip:",d_ip," clk_ip:", clk_ip," q_out:", q_out," q_bar_out:", q_bar_out)
print("= Test set of input vectors #4.")
clk_ip = 1
d_ip = 1
(q_out, q_bar_out) = d_flip_flop(d_ip,clk_ip,q_out)
print("d_ip:",d_ip," clk_ip:", clk_ip," q_out:", q_out," q_bar_out:", q_bar_out)
print("= Test set of input vectors #5.")
clk_ip = 0
d_ip = 1
(q_out, q_bar_out) = d_flip_flop(d_ip,clk_ip,q_out)
print("d_ip:",d_ip," clk_ip:", clk_ip," q_out:", q_out," q_bar_out:", q_bar_out)
print("= Test set of input vectors #6.")
clk_ip = 1
d_ip = 0
(q_out, q_bar_out) = d_flip_flop(d_ip,clk_ip,q_out)
print("d_ip:",d_ip," clk_ip:", clk_ip," q_out:", q_out," q_bar_out:", q_bar_out)
print("= Test set of input vectors #7.")
clk_ip = 0
d_ip = 0
(q_out, q_bar_out) = d_flip_flop(d_ip,clk_ip,q_out)
print("d_ip:",d_ip," clk_ip:", clk_ip," q_out:", q_out," q_bar_out:", q_bar_out)
def three_bit_register_flip_flop(d2, d1, d0, clk, q2=None, q1=None, q0=None):
# Temporary incorrect assignment for q[i]_bars.
q2_bar = None
if q2 is None:
q2_bar = None
q1_bar = None
if q1 is None:
q1_bar = None
q0_bar = None
if q0 is None:
q0_bar = None
if (1 == clk):
q2 = d2
q1 = d1
q0 = d0
if (1 == q2):
q2_bar = 0
else:
q2_bar = 1
if (1 == q1):
q1_bar = 0
else:
q1_bar = 1
if (1 == q0):
q0_bar = 0
else:
q0_bar = 1
elif ((clk is not None) and (q2 is not None) and (q2_bar is not None)):
q2, q2_bar = None, None
elif ((clk is not None) and (q1 is not None) and (q1_bar is not None)):
q1, q1_bar = None, None
elif ((clk is not None) and (q0 is not None) and (q0_bar is not None)):
q0, q0_bar = None, None
else:
if (1 == q2):
q2_bar = 0
else:
q2_bar = 1
if (1 == q1):
q1_bar = 0
else:
q1_bar = 1
if (1 == q0):
q0_bar = 0
else:
q0_bar = 1
return q2, q2_bar, q1, q1_bar, q0, q0_bar
def test_three_bit_register_flip_flop():
print("= Test 3-bit D flip-flop.")
print("= Test set of input vectors #1.")
clk_ip = 0
d2_ip, d1_ip, d0_ip = 0, 0, 0
(q2_out, q2_bar_out, q1_out, q1_bar_out, q0_out, q0_bar_out) = three_bit_register_flip_flop(d2_ip, d1_ip, d0_ip,clk_ip)
print("d2_ip:",d2_ip,"d1_ip:",d1_ip,"d0_ip:",d0_ip," clk_ip:", clk_ip," q2_out:", q2_out," q2_bar_out:", q2_bar_out, " q1_out:", q1_out," q1_bar_out:", q1_bar_out, " q0_out:", q0_out," q0_bar_out:", q0_bar_out)
print("= Test set of input vectors #1.")
clk_ip = 0
d2_ip, d1_ip, d0_ip = 0, 0, 0
(q2_out, q2_bar_out, q1_out, q1_bar_out, q0_out, q0_bar_out) = three_bit_register_flip_flop(d2_ip, d1_ip, d0_ip,clk_ip)
print("d2_ip:",d2_ip,"d1_ip:",d1_ip,"d0_ip:",d0_ip," clk_ip:", clk_ip," q2_out:", q2_out," q2_bar_out:", q2_bar_out, " q1_out:", q1_out," q1_bar_out:", q1_bar_out, " q0_out:", q0_out," q0_bar_out:", q0_bar_out)
if __name__ == "__main__":
test_full_adder()
"""
Example of how to use tuples to represent bit-vectors.
a_bitvec = (6, 12, 9, 14, 53)
print("a_bitvec[2]:",a_bitvec[2],"; a_bitvec[4]",a_bitvec[4],"=")
"""
print("")
test_4_bit_adder()
print("")
test_8_bit_adder()
print("")
test_d_flip_flop()
print("")
# sandbox/python/vlsi/
test_three_bit_register_flip_flop()
###Output
_____no_output_____
|
apphub/image_classification/lenet_cifar10_mixup/lenet_cifar10_mixup.ipynb
|
###Markdown
CIFAR10 Image Classification Using LeNet In this tutorial, we are going to walk through the logic in `lenet_cifar10_mixup.py` shown below and provide step-by-step instructions.
###Code
!cat lenet_cifar10_mixup.py
###Output
# Copyright 2019 The FastEstimator Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import tempfile
import tensorflow as tf
from tensorflow.python.keras.losses import SparseCategoricalCrossentropy as KerasCrossentropy
import fastestimator as fe
from fastestimator.architecture import LeNet
from fastestimator.op.tensorop import MixUpLoss, SparseCategoricalCrossentropy, ModelOp, MixUpBatch, Minmax
from fastestimator import FEModel
from fastestimator.schedule import Scheduler
from fastestimator.trace import Accuracy, ConfusionMatrix, ModelSaver
def get_estimator(epochs=10, batch_size=32, alpha=1.0, warmup=0, model_dir=tempfile.mkdtemp()):
(x_train, y_train), (x_eval, y_eval) = tf.keras.datasets.cifar10.load_data()
data = {"train": {"x": x_train, "y": y_train}, "eval": {"x": x_eval, "y": y_eval}}
num_classes = 10
pipeline = fe.Pipeline(batch_size=batch_size, data=data, ops=Minmax(inputs="x", outputs="x"))
model = FEModel(model_def=lambda: LeNet(input_shape=x_train.shape[1:], classes=num_classes),
model_name="LeNet",
optimizer="adam")
mixup_map = {warmup: MixUpBatch(inputs="x", outputs=["x", "lambda"], alpha=alpha, mode="train")}
mixup_loss = {
0: SparseCategoricalCrossentropy(y_true="y", y_pred="y_pred", mode="train"),
warmup: MixUpLoss(KerasCrossentropy(), lam="lambda", y_true="y", y_pred="y_pred", mode="train")
}
network = fe.Network(ops=[
Scheduler(mixup_map),
ModelOp(inputs="x", model=model, outputs="y_pred"),
Scheduler(mixup_loss),
SparseCategoricalCrossentropy(y_true="y", y_pred="y_pred", mode="eval")
])
traces = [
Accuracy(true_key="y", pred_key="y_pred"),
ConfusionMatrix(true_key="y", pred_key="y_pred", num_classes=num_classes),
ModelSaver(model_name="LeNet", save_dir=model_dir, save_best=True)
]
estimator = fe.Estimator(network=network, pipeline=pipeline, epochs=epochs, traces=traces)
return estimator
if __name__ == "__main__":
est = get_estimator()
est.fit()
###Markdown
Step 1: Prepare training and evaluation dataset, create FastEstimator `Pipeline` `Pipeline` can take both data in memory and data on disk. In this example, we are going to use in-memory data loaded with `tf.keras.datasets.cifar10`.
###Code
import tensorflow as tf
(x_train, y_train), (x_eval, y_eval) = tf.keras.datasets.cifar10.load_data()
print("train image shape is {}".format(x_train.shape))
print("train label shape is {}".format(y_train.shape))
print("eval image shape is {}".format(x_eval.shape))
print("eval label shape is {}".format(y_eval.shape))
###Output
train image shape is (50000, 32, 32, 3)
train label shape is (50000, 1)
eval image shape is (10000, 32, 32, 3)
eval label shape is (10000, 1)
###Markdown
For in-memory data in `Pipeline`, the data format should be a nested dictionary like: {"mode1": {"feature1": numpy_array, "feature2": numpy_array, ...}, ...}. Each `mode` can be either `train` or `eval`; in our case, we have both `train` and `eval`. Each `feature` is a feature name; in our case, we have `x` and `y`.
###Code
data = {"train": {"x": x_train, "y": y_train}, "eval": {"x": x_eval, "y": y_eval}}
###Output
_____no_output_____
###Markdown
Now we are ready to define the `Pipeline`: we apply `Minmax` online preprocessing to the image feature `x` for both training and evaluation:
###Code
import fastestimator as fe
from fastestimator.op.tensorop import Minmax
pipeline = fe.Pipeline(batch_size=50, data=data, ops=Minmax(inputs="x", outputs="x"))
###Output
_____no_output_____
###Markdown
Step 2: Prepare model, create FastEstimator `Network` First, we have to define the network architecture as a `tf.keras.Model` or `tf.keras.Sequential`. For a popular architecture like LeNet, FastEstimator already provides an implementation in [fastestimator.architecture.lenet](https://github.com/fastestimator/fastestimator/blob/master/fastestimator/architecture/lenet.py). After defining the architecture, users feed the architecture definition and its associated model name, optimizer, and loss name (defaults to 'loss') to `FEModel`.
###Code
from fastestimator.architecture import LeNet
from fastestimator import FEModel
model = FEModel(model_def=lambda: LeNet(input_shape=x_train.shape[1:], classes=10), model_name="LeNet", optimizer="adam")
###Output
_____no_output_____
###Markdown
We can now define a simple `Network`: given a batch of data with keys `x` and `y`, we have to work our way to the `loss` through a series of operators. `ModelOp` is an operator that contains a model.
###Code
from fastestimator.op.tensorop import ModelOp, SparseCategoricalCrossentropy
simple_network = fe.Network(ops=[ModelOp(inputs="x", model=model, outputs="y_pred"),
SparseCategoricalCrossentropy(y_pred="y_pred", y_true="y", outputs="loss")])
###Output
_____no_output_____
###Markdown
One advantage of `FastEstimator`, though, is that it is easy to construct much more complicated graphs. In this example, we want to conduct training by [mixing up input images](https://arxiv.org/abs/1710.09412), since this has been shown to make neural networks more robust against adversarial attacks, as well as helping to prevent over-fitting. To achieve this in `FastEstimator`, we start by randomly pairing and linearly combining inputs, then feeding the mixed images to the `ModelOp` before computing the loss. Note that mixup is only performed during training (not evaluation), and so the mode on the mix-related operations is set to 'train'. We use schedulers to enable mixup only after the first epoch, since it takes quite a while to converge otherwise.
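As described in the mixup paper (and reflected in the shared "lambda" key between the two ops), `MixUpBatch` draws lambda from a Beta(alpha, alpha) distribution and replaces each input x_i with lambda * x_i + (1 - lambda) * x_j for a randomly paired x_j from the same batch, while `MixUpLoss` combines the two labels' losses with the same weights: lambda * loss(y_i) + (1 - lambda) * loss(y_j).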
###Code
from tensorflow.python.keras.losses import SparseCategoricalCrossentropy as KerasCrossentropy
from fastestimator.op.tensorop import MixUpBatch, MixUpLoss
from fastestimator.schedule import Scheduler
pipeline2 = fe.Pipeline(batch_size=50, data=data, ops=Minmax(inputs="x", outputs="x"))
model2 = FEModel(model_def=lambda: LeNet(input_shape=x_train.shape[1:], classes=10), model_name="LeNet", optimizer="adam")
warmup = 1
mixup_network = fe.Network(ops=[
Scheduler({warmup: MixUpBatch(inputs="x", outputs=["x", "lambda"], alpha=1, mode="train")}),
ModelOp(inputs="x", model=model2, outputs="y_pred"),
Scheduler({0: SparseCategoricalCrossentropy(y_true="y", y_pred="y_pred", mode="train"),
warmup: MixUpLoss(KerasCrossentropy(), lam="lambda", y_true="y", y_pred="y_pred", mode="train")}),
SparseCategoricalCrossentropy(y_true="y", y_pred="y_pred", mode="eval")
])
###Output
_____no_output_____
###Markdown
Step 3: Configure training, create `Estimator` During the training loop, we want to: 1) measure accuracy on the evaluation data 2) save the model with the lowest validation loss. The `Trace` class is used for anything related to the training loop, and we will need to import the `Accuracy` and `ModelSaver` traces. (In this notebook only the `Accuracy` trace is added, so the models are not saved and the training logs warn about the missing `ModelSaver`; the standalone script at the top does add one.)
###Code
import tempfile
import os
from fastestimator.trace import Accuracy, ModelSaver
simple_traces = [Accuracy(true_key="y", pred_key="y_pred", output_name='acc')]
mixup_traces = [Accuracy(true_key="y", pred_key="y_pred", output_name='acc')]
###Output
_____no_output_____
###Markdown
Now we can define the `Estimator` and specify the training configuration. We will create estimators for both the simple and mixup networks in order to compare their performances.
###Code
simple_estimator = fe.Estimator(network=simple_network, pipeline=pipeline, epochs=35, traces=simple_traces, log_steps=750)
mixup_estimator = fe.Estimator(network=mixup_network, pipeline=pipeline2, epochs=35, traces=mixup_traces, log_steps=750)
###Output
_____no_output_____
###Markdown
Step 4: Training We'll start by training the regular network (takes about 20 minutes on a 2015 MacBookPro CPU - 2.5 GHz Intel Core i7). The network should attain a peak evaluation accuracy around 72%
###Code
simple_summary = simple_estimator.fit(summary="simple")
###Output
______ __ ______ __ _ __
/ ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____
/ /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/
/ __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / /
/_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/
FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved.
FastEstimator-Start: step: 0; total_train_steps: 35000; LeNet_lr: 0.001;
FastEstimator-Train: step: 0; loss: 2.2989788;
FastEstimator-Train: step: 750; loss: 1.3157376; examples/sec: 1518.4; progress: 2.1%;
FastEstimator-Eval: step: 1000; epoch: 0; loss: 1.2433634; min_loss: 1.2433634; since_best_loss: 0; acc: 0.5502;
FastEstimator-Train: step: 1500; loss: 1.4256195; examples/sec: 1416.8; progress: 4.3%;
FastEstimator-Eval: step: 2000; epoch: 1; loss: 1.1029515; min_loss: 1.1029515; since_best_loss: 0; acc: 0.6091;
FastEstimator-Train: step: 2250; loss: 1.1579484; examples/sec: 1346.8; progress: 6.4%;
FastEstimator-Eval: step: 3000; epoch: 2; loss: 1.0051966; min_loss: 1.0051966; since_best_loss: 0; acc: 0.6488;
FastEstimator-Train: step: 3000; loss: 1.2000177; examples/sec: 1330.8; progress: 8.6%;
FastEstimator-Train: step: 3750; loss: 0.9446747; examples/sec: 1303.5; progress: 10.7%;
FastEstimator-Eval: step: 4000; epoch: 3; loss: 0.9416299; min_loss: 0.94162995; since_best_loss: 0; acc: 0.6711;
FastEstimator-Train: step: 4500; loss: 0.9623857; examples/sec: 1315.8; progress: 12.9%;
FastEstimator-Eval: step: 5000; epoch: 4; loss: 0.9077615; min_loss: 0.9077615; since_best_loss: 0; acc: 0.6852;
FastEstimator-Train: step: 5250; loss: 0.9986706; examples/sec: 1373.0; progress: 15.0%;
FastEstimator-Eval: step: 6000; epoch: 5; loss: 0.9207905; min_loss: 0.9077615; since_best_loss: 1; acc: 0.6865;
FastEstimator-Train: step: 6000; loss: 0.7511868; examples/sec: 1351.1; progress: 17.1%;
FastEstimator-Train: step: 6750; loss: 0.8952303; examples/sec: 1158.7; progress: 19.3%;
FastEstimator-Eval: step: 7000; epoch: 6; loss: 0.9191301; min_loss: 0.9077615; since_best_loss: 2; acc: 0.6849;
FastEstimator-Train: step: 7500; loss: 0.7468068; examples/sec: 1207.7; progress: 21.4%;
FastEstimator-Eval: step: 8000; epoch: 7; loss: 0.8561178; min_loss: 0.85611784; since_best_loss: 0; acc: 0.7034;
FastEstimator-Train: step: 8250; loss: 0.6032203; examples/sec: 1305.5; progress: 23.6%;
FastEstimator-Eval: step: 9000; epoch: 8; loss: 0.8458996; min_loss: 0.84589964; since_best_loss: 0; acc: 0.71;
FastEstimator-Train: step: 9000; loss: 0.5752448; examples/sec: 1234.1; progress: 25.7%;
FastEstimator-Train: step: 9750; loss: 0.807608; examples/sec: 1323.9; progress: 27.9%;
FastEstimator-Eval: step: 10000; epoch: 9; loss: 0.8584864; min_loss: 0.84589964; since_best_loss: 1; acc: 0.7079;
FastEstimator-Train: step: 10500; loss: 0.4007401; examples/sec: 1327.5; progress: 30.0%;
FastEstimator-Eval: step: 11000; epoch: 10; loss: 0.847151; min_loss: 0.84589964; since_best_loss: 2; acc: 0.7152;
FastEstimator-Train: step: 11250; loss: 0.594514; examples/sec: 1326.8; progress: 32.1%;
FastEstimator-Eval: step: 12000; epoch: 11; loss: 0.9184404; min_loss: 0.84589964; since_best_loss: 3; acc: 0.7099;
FastEstimator-Train: step: 12000; loss: 0.58203; examples/sec: 1368.5; progress: 34.3%;
FastEstimator-Train: step: 12750; loss: 0.4818403; examples/sec: 1335.5; progress: 36.4%;
FastEstimator-Eval: step: 13000; epoch: 12; loss: 0.846058; min_loss: 0.84589964; since_best_loss: 4; acc: 0.7259;
FastEstimator-Train: step: 13500; loss: 0.3530467; examples/sec: 1340.5; progress: 38.6%;
FastEstimator-Eval: step: 14000; epoch: 13; loss: 0.8999681; min_loss: 0.84589964; since_best_loss: 5; acc: 0.718;
FastEstimator-Train: step: 14250; loss: 0.5744307; examples/sec: 1432.3; progress: 40.7%;
FastEstimator-Eval: step: 15000; epoch: 14; loss: 0.9334542; min_loss: 0.84589964; since_best_loss: 6; acc: 0.7096;
FastEstimator-Train: step: 15000; loss: 0.3631234; examples/sec: 1416.6; progress: 42.9%;
FastEstimator-Train: step: 15750; loss: 0.3753418; examples/sec: 1326.6; progress: 45.0%;
FastEstimator-Eval: step: 16000; epoch: 15; loss: 0.9351049; min_loss: 0.84589964; since_best_loss: 7; acc: 0.7143;
FastEstimator-Train: step: 16500; loss: 0.3983784; examples/sec: 1386.3; progress: 47.1%;
FastEstimator-Eval: step: 17000; epoch: 16; loss: 0.9699541; min_loss: 0.84589964; since_best_loss: 8; acc: 0.7068;
FastEstimator-Train: step: 17250; loss: 0.8026924; examples/sec: 1409.8; progress: 49.3%;
FastEstimator-Eval: step: 18000; epoch: 17; loss: 1.0405009; min_loss: 0.84589964; since_best_loss: 9; acc: 0.7092;
FastEstimator-Train: step: 18000; loss: 0.2966534; examples/sec: 1392.0; progress: 51.4%;
FastEstimator-Train: step: 18750; loss: 0.3240081; examples/sec: 1380.8; progress: 53.6%;
FastEstimator-Eval: step: 19000; epoch: 18; loss: 1.0830307; min_loss: 0.84589964; since_best_loss: 10; acc: 0.6979;
FastEstimator-Train: step: 19500; loss: 0.2596785; examples/sec: 1272.7; progress: 55.7%;
FastEstimator-Eval: step: 20000; epoch: 19; loss: 1.1409403; min_loss: 0.84589964; since_best_loss: 11; acc: 0.6978;
FastEstimator-Train: step: 20250; loss: 0.2304578; examples/sec: 1327.9; progress: 57.9%;
FastEstimator-Eval: step: 21000; epoch: 20; loss: 1.1332737; min_loss: 0.84589964; since_best_loss: 12; acc: 0.6968;
FastEstimator-Train: step: 21000; loss: 0.3295046; examples/sec: 1328.5; progress: 60.0%;
FastEstimator-Train: step: 21750; loss: 0.1731121; examples/sec: 1409.9; progress: 62.1%;
FastEstimator-Eval: step: 22000; epoch: 21; loss: 1.1821457; min_loss: 0.84589964; since_best_loss: 13; acc: 0.707;
FastEstimator-Train: step: 22500; loss: 0.4336742; examples/sec: 1352.4; progress: 64.3%;
FastEstimator-Eval: step: 23000; epoch: 22; loss: 1.2123183; min_loss: 0.84589964; since_best_loss: 14; acc: 0.7007;
FastEstimator-Train: step: 23250; loss: 0.4531382; examples/sec: 1356.6; progress: 66.4%;
FastEstimator-Eval: step: 24000; epoch: 23; loss: 1.3681846; min_loss: 0.84589964; since_best_loss: 15; acc: 0.6902;
FastEstimator-Train: step: 24000; loss: 0.1203288; examples/sec: 1321.5; progress: 68.6%;
FastEstimator-Train: step: 24750; loss: 0.4227715; examples/sec: 1430.0; progress: 70.7%;
FastEstimator-Eval: step: 25000; epoch: 24; loss: 1.3090713; min_loss: 0.84589964; since_best_loss: 16; acc: 0.6991;
FastEstimator-Train: step: 25500; loss: 0.2697889; examples/sec: 1434.5; progress: 72.9%;
FastEstimator-Eval: step: 26000; epoch: 25; loss: 1.3569603; min_loss: 0.84589964; since_best_loss: 17; acc: 0.7042;
FastEstimator-Train: step: 26250; loss: 0.176522; examples/sec: 1391.3; progress: 75.0%;
FastEstimator-Eval: step: 27000; epoch: 26; loss: 1.4096411; min_loss: 0.84589964; since_best_loss: 18; acc: 0.6992;
FastEstimator-Train: step: 27000; loss: 0.169631; examples/sec: 1435.4; progress: 77.1%;
FastEstimator-Train: step: 27750; loss: 0.1861947; examples/sec: 1441.5; progress: 79.3%;
FastEstimator-Eval: step: 28000; epoch: 27; loss: 1.4584777; min_loss: 0.84589964; since_best_loss: 19; acc: 0.6939;
FastEstimator-Train: step: 28500; loss: 0.0766525; examples/sec: 1426.6; progress: 81.4%;
FastEstimator-Eval: step: 29000; epoch: 28; loss: 1.5798118; min_loss: 0.84589964; since_best_loss: 20; acc: 0.6951;
FastEstimator-Train: step: 29250; loss: 0.2569139; examples/sec: 1425.9; progress: 83.6%;
FastEstimator-Eval: step: 30000; epoch: 29; loss: 1.639139; min_loss: 0.84589964; since_best_loss: 21; acc: 0.6984;
FastEstimator-Train: step: 30000; loss: 0.1210424; examples/sec: 1383.9; progress: 85.7%;
FastEstimator-Train: step: 30750; loss: 0.1564893; examples/sec: 1478.5; progress: 87.9%;
FastEstimator-Eval: step: 31000; epoch: 30; loss: 1.6667398; min_loss: 0.84589964; since_best_loss: 22; acc: 0.6848;
FastEstimator-Train: step: 31500; loss: 0.0912744; examples/sec: 1505.9; progress: 90.0%;
FastEstimator-Eval: step: 32000; epoch: 31; loss: 1.7258862; min_loss: 0.84589964; since_best_loss: 23; acc: 0.6895;
FastEstimator-Train: step: 32250; loss: 0.0238111; examples/sec: 1539.0; progress: 92.1%;
FastEstimator-Eval: step: 33000; epoch: 32; loss: 1.747565; min_loss: 0.84589964; since_best_loss: 24; acc: 0.6897;
FastEstimator-Train: step: 33000; loss: 0.1130405; examples/sec: 1551.6; progress: 94.3%;
FastEstimator-Train: step: 33750; loss: 0.1497695; examples/sec: 1555.8; progress: 96.4%;
FastEstimator-Eval: step: 34000; epoch: 33; loss: 1.8326926; min_loss: 0.84589964; since_best_loss: 25; acc: 0.6937;
FastEstimator-Train: step: 34500; loss: 0.1434229; examples/sec: 1542.5; progress: 98.6%;
FastEstimator-Eval: step: 35000; epoch: 34; loss: 1.9228554; min_loss: 0.84589964; since_best_loss: 26; acc: 0.6976;
FastEstimator-Finish: step: 35000; total_time: 1341.29 sec; LeNet_lr: 0.001;
###Markdown
Next we train the network using mixup. This process requires more epochs to converge since the training task is more difficult, though it should get to around 75% evaluation accuracy.
###Code
mixup_summary = mixup_estimator.fit(summary="mixup")
###Output
______ __ ______ __ _ __
/ ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____
/ /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/
/ __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / /
/_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/
FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved.
FastEstimator-Start: step: 0; total_train_steps: 35000; LeNet_lr: 0.001;
FastEstimator-Train: step: 0; loss: 2.2784815;
FastEstimator-Train: step: 750; loss: 1.4856188; examples/sec: 1524.5; progress: 2.1%;
FastEstimator-Eval: step: 1000; epoch: 0; loss: 1.3363686; min_loss: 1.3363686; since_best_loss: 0; acc: 0.5172;
FastEstimator-Train: step: 1500; loss: 1.2233111; examples/sec: 1456.1; progress: 4.3%;
FastEstimator-Eval: step: 2000; epoch: 1; loss: 1.2171823; min_loss: 1.2171823; since_best_loss: 0; acc: 0.5863;
FastEstimator-Train: step: 2250; loss: 1.1668144; examples/sec: 1406.4; progress: 6.4%;
FastEstimator-Eval: step: 3000; epoch: 2; loss: 1.0894121; min_loss: 1.0894121; since_best_loss: 0; acc: 0.6292;
FastEstimator-Train: step: 3000; loss: 1.2570294; examples/sec: 1492.3; progress: 8.6%;
FastEstimator-Train: step: 3750; loss: 1.3939685; examples/sec: 1502.1; progress: 10.7%;
FastEstimator-Eval: step: 4000; epoch: 3; loss: 1.0413618; min_loss: 1.0413618; since_best_loss: 0; acc: 0.6493;
FastEstimator-Train: step: 4500; loss: 1.8250064; examples/sec: 1450.0; progress: 12.9%;
FastEstimator-Eval: step: 5000; epoch: 4; loss: 1.0359317; min_loss: 1.0359317; since_best_loss: 0; acc: 0.6562;
FastEstimator-Train: step: 5250; loss: 1.7612226; examples/sec: 1371.1; progress: 15.0%;
FastEstimator-Eval: step: 6000; epoch: 5; loss: 0.9976663; min_loss: 0.9976663; since_best_loss: 0; acc: 0.6709;
FastEstimator-Train: step: 6000; loss: 0.9774529; examples/sec: 1471.8; progress: 17.1%;
FastEstimator-Train: step: 6750; loss: 1.9741082; examples/sec: 1495.6; progress: 19.3%;
FastEstimator-Eval: step: 7000; epoch: 6; loss: 0.9284982; min_loss: 0.9284982; since_best_loss: 0; acc: 0.6851;
FastEstimator-Train: step: 7500; loss: 0.9885097; examples/sec: 1484.0; progress: 21.4%;
FastEstimator-Eval: step: 8000; epoch: 7; loss: 0.9322959; min_loss: 0.9284982; since_best_loss: 1; acc: 0.7013;
FastEstimator-Train: step: 8250; loss: 1.6311705; examples/sec: 1508.1; progress: 23.6%;
FastEstimator-Eval: step: 9000; epoch: 8; loss: 0.9396379; min_loss: 0.9284982; since_best_loss: 2; acc: 0.6862;
FastEstimator-Train: step: 9000; loss: 1.6651527; examples/sec: 1521.2; progress: 25.7%;
FastEstimator-Train: step: 9750; loss: 1.7490898; examples/sec: 1533.8; progress: 27.9%;
FastEstimator-Eval: step: 10000; epoch: 9; loss: 0.9156775; min_loss: 0.91567755; since_best_loss: 0; acc: 0.7085;
FastEstimator-Train: step: 10500; loss: 1.8863173; examples/sec: 1499.8; progress: 30.0%;
FastEstimator-Eval: step: 11000; epoch: 10; loss: 0.8908026; min_loss: 0.8908026; since_best_loss: 0; acc: 0.709;
FastEstimator-Train: step: 11250; loss: 1.6603231; examples/sec: 1488.4; progress: 32.1%;
FastEstimator-Eval: step: 12000; epoch: 11; loss: 0.8871776; min_loss: 0.8871776; since_best_loss: 0; acc: 0.7129;
FastEstimator-Train: step: 12000; loss: 0.7897728; examples/sec: 1487.1; progress: 34.3%;
FastEstimator-Train: step: 12750; loss: 1.672348; examples/sec: 1518.3; progress: 36.4%;
FastEstimator-Eval: step: 13000; epoch: 12; loss: 0.891498; min_loss: 0.8871776; since_best_loss: 1; acc: 0.7163;
FastEstimator-Train: step: 13500; loss: 0.9792987; examples/sec: 1502.4; progress: 38.6%;
FastEstimator-Eval: step: 14000; epoch: 13; loss: 0.8771259; min_loss: 0.87712586; since_best_loss: 0; acc: 0.715;
FastEstimator-Train: step: 14250; loss: 0.8483901; examples/sec: 1520.3; progress: 40.7%;
FastEstimator-Eval: step: 15000; epoch: 14; loss: 0.8443844; min_loss: 0.84438443; since_best_loss: 0; acc: 0.7252;
FastEstimator-Train: step: 15000; loss: 1.4111552; examples/sec: 1509.3; progress: 42.9%;
FastEstimator-Train: step: 15750; loss: 0.8739895; examples/sec: 1459.3; progress: 45.0%;
FastEstimator-Eval: step: 16000; epoch: 15; loss: 0.8628039; min_loss: 0.84438443; since_best_loss: 1; acc: 0.7267;
FastEstimator-Train: step: 16500; loss: 0.9846738; examples/sec: 1495.6; progress: 47.1%;
FastEstimator-Eval: step: 17000; epoch: 16; loss: 0.8365366; min_loss: 0.8365366; since_best_loss: 0; acc: 0.7314;
FastEstimator-Train: step: 17250; loss: 1.5324712; examples/sec: 1502.5; progress: 49.3%;
FastEstimator-Eval: step: 18000; epoch: 17; loss: 0.888; min_loss: 0.8365366; since_best_loss: 1; acc: 0.7077;
FastEstimator-Train: step: 18000; loss: 1.6160111; examples/sec: 1421.9; progress: 51.4%;
FastEstimator-Train: step: 18750; loss: 1.2287451; examples/sec: 1509.9; progress: 53.6%;
FastEstimator-Eval: step: 19000; epoch: 18; loss: 0.8122694; min_loss: 0.81226945; since_best_loss: 0; acc: 0.7446;
FastEstimator-Train: step: 19500; loss: 1.8419383; examples/sec: 1513.6; progress: 55.7%;
FastEstimator-Eval: step: 20000; epoch: 19; loss: 0.8234974; min_loss: 0.81226945; since_best_loss: 1; acc: 0.7313;
FastEstimator-Train: step: 20250; loss: 1.6550089; examples/sec: 1517.8; progress: 57.9%;
FastEstimator-Eval: step: 21000; epoch: 20; loss: 0.8587535; min_loss: 0.81226945; since_best_loss: 2; acc: 0.7251;
FastEstimator-Train: step: 21000; loss: 1.4547342; examples/sec: 1459.3; progress: 60.0%;
FastEstimator-Train: step: 21750; loss: 1.4801158; examples/sec: 1379.7; progress: 62.1%;
FastEstimator-Eval: step: 22000; epoch: 21; loss: 0.8384032; min_loss: 0.81226945; since_best_loss: 3; acc: 0.7371;
FastEstimator-Train: step: 22500; loss: 1.3591273; examples/sec: 1545.7; progress: 64.3%;
FastEstimator-Eval: step: 23000; epoch: 22; loss: 0.8036346; min_loss: 0.80363464; since_best_loss: 0; acc: 0.7443;
FastEstimator-Train: step: 23250; loss: 1.0346556; examples/sec: 1518.1; progress: 66.4%;
FastEstimator-Eval: step: 24000; epoch: 23; loss: 0.836709; min_loss: 0.80363464; since_best_loss: 1; acc: 0.7331;
FastEstimator-Train: step: 24000; loss: 1.0664824; examples/sec: 1510.4; progress: 68.6%;
FastEstimator-Train: step: 24750; loss: 1.7216514; examples/sec: 1554.6; progress: 70.7%;
FastEstimator-Eval: step: 25000; epoch: 24; loss: 0.8503023; min_loss: 0.80363464; since_best_loss: 2; acc: 0.7273;
FastEstimator-Train: step: 25500; loss: 0.8602405; examples/sec: 1530.3; progress: 72.9%;
FastEstimator-Eval: step: 26000; epoch: 25; loss: 0.8132642; min_loss: 0.80363464; since_best_loss: 3; acc: 0.7411;
FastEstimator-Train: step: 26250; loss: 1.4390676; examples/sec: 1513.1; progress: 75.0%;
FastEstimator-Eval: step: 27000; epoch: 26; loss: 0.8023642; min_loss: 0.8023642; since_best_loss: 0; acc: 0.7435;
FastEstimator-Train: step: 27000; loss: 1.7524374; examples/sec: 1540.7; progress: 77.1%;
FastEstimator-Train: step: 27750; loss: 1.3336357; examples/sec: 1594.5; progress: 79.3%;
FastEstimator-Eval: step: 28000; epoch: 27; loss: 0.8116911; min_loss: 0.8023642; since_best_loss: 1; acc: 0.7395;
FastEstimator-Train: step: 28500; loss: 1.0761751; examples/sec: 1556.6; progress: 81.4%;
FastEstimator-Eval: step: 29000; epoch: 28; loss: 0.8441216; min_loss: 0.8023642; since_best_loss: 2; acc: 0.7324;
FastEstimator-Train: step: 29250; loss: 1.1422424; examples/sec: 1554.9; progress: 83.6%;
FastEstimator-Eval: step: 30000; epoch: 29; loss: 0.8132516; min_loss: 0.8023642; since_best_loss: 3; acc: 0.7362;
FastEstimator-Train: step: 30000; loss: 1.694069; examples/sec: 1553.7; progress: 85.7%;
FastEstimator-Train: step: 30750; loss: 0.6191846; examples/sec: 1560.7; progress: 87.9%;
FastEstimator-Eval: step: 31000; epoch: 30; loss: 0.8216836; min_loss: 0.8023642; since_best_loss: 4; acc: 0.7424;
FastEstimator-Train: step: 31500; loss: 0.7421934; examples/sec: 1544.9; progress: 90.0%;
FastEstimator-Eval: step: 32000; epoch: 31; loss: 0.8028535; min_loss: 0.8023642; since_best_loss: 5; acc: 0.741;
FastEstimator-Train: step: 32250; loss: 0.745209; examples/sec: 1552.0; progress: 92.1%;
FastEstimator-Eval: step: 33000; epoch: 32; loss: 0.8225112; min_loss: 0.8023642; since_best_loss: 6; acc: 0.7458;
FastEstimator-Train: step: 33000; loss: 1.7576869; examples/sec: 1546.8; progress: 94.3%;
FastEstimator-Train: step: 33750; loss: 1.3495349; examples/sec: 1543.0; progress: 96.4%;
FastEstimator-Eval: step: 34000; epoch: 33; loss: 0.8259973; min_loss: 0.8023642; since_best_loss: 7; acc: 0.7348;
FastEstimator-Train: step: 34500; loss: 1.6177179; examples/sec: 1439.8; progress: 98.6%;
FastEstimator-Eval: step: 35000; epoch: 34; loss: 0.8024238; min_loss: 0.8023642; since_best_loss: 8; acc: 0.741;
FastEstimator-Finish: step: 35000; total_time: 1227.67 sec; LeNet_lr: 0.001;
###Markdown
Step 5: Comparing Results As the performance logs make clear, the mixup training method is extremely effective in combatting overfitting. Whereas the regular model begins to overfit around epoch 7, the network with mixup training continues to improve even after 35 epochs.
###Code
from fastestimator.summary import visualize_logs
visualize_logs([simple_summary, mixup_summary], ignore_metrics={"LeNet_lr"})
###Output
_____no_output_____
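###Markdown
For readers unfamiliar with the technique, the cell below is a minimal NumPy sketch of what mixup does to a batch: random pairing, a Beta-distributed mixing coefficient, and a convex combination of the images and of their one-hot targets. It only illustrates the idea; the actual training above uses FastEstimator's `MixUpBatch`/`MixUpLoss` ops.
###Code
import numpy as np

def mixup_batch(x, y_onehot, alpha=1.0, seed=0):
    """Illustrative mixup: convex-combine a batch with a shuffled copy of itself."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)            # mixing coefficient lambda ~ Beta(alpha, alpha)
    idx = rng.permutation(len(x))           # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[idx]  # mixed images
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[idx]  # equivalent mixed targets for cross-entropy
    return x_mix, y_mix, lam

x_demo = np.random.rand(4, 32, 32, 3).astype("float32")
y_demo = np.eye(10)[np.random.randint(0, 10, size=4)]
x_mix, y_mix, lam = mixup_batch(x_demo, y_demo)
print(x_mix.shape, y_mix.shape, round(float(lam), 3))
###Output
_____no_output_____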
|
notebooks/logging_example.ipynb
|
###Markdown
Load data
###Code
# Load some data
df = pd.read_csv('lending_club_1000.csv')
# Split into a test & training set
df_training = df.sample(int(len(df) * 0.8), replace=False, random_state=123)
df_test = df.drop(df_training.index)
df.head()
###Output
_____no_output_____
###Markdown
Log dataset sketches
###Code
from whylabs.logs import get_or_create_session, get_logger
s = get_or_create_session(
output_to_disk=True,
output_flat_summary=True,
output_to_cloud=False, # For now, we won't output to the cloud
bucket='whylabs-isaac', # although we can still configure cloud output
cloud_output_folder='test/logging',
)
logger = get_logger()
# Available config options
s.config
###Output
_____no_output_____
###Markdown
Log dataframe
###Code
logger.log_dataframe(df_training, 'training.data')
# Then you could do whatever training or calculations you'd like
###Output
_____no_output_____
###Markdown
Inspect profiles/statistics
###Code
# You can also capture the logger response and interact with the generated
# profiles
# Log the test data
response = logger.log_dataframe(df_test, 'test.data')
# Inspect the dataset profile sketch
prof = response['profile']
summary = prof.flat_summary()
stats_df = summary['summary']
stats_df
# See one of the inspected histograms
hist_data = summary['hist']['fico_range_high']
bins = hist_data['bin_edges']
n = hist_data['counts']
bin_width = np.diff(bins)
plt.bar(bins[0:-1], n, bin_width, align='edge')
###Output
_____no_output_____
###Markdown
Load logged data
###Code
import glob
###Output
_____no_output_____
###Markdown
Load flat table statistics
###Code
# Load the flat table statistics from the 'test.data' dataset
fnames = glob.glob('whylogs/test.data/dataset_summary/flat_table/dataset_summary*.csv')
fnames.sort()
# Load the most recent file
test_stats = pd.read_csv(fnames[-1])
test_stats
###Output
_____no_output_____
###Markdown
Load the full dataset profile sketch
###Code
from whylabs.logs.core import datasetprofile
# Load a dataset profile from the 'test.data' dataset
fnames = glob.glob('whylogs/test.data/dataset_profile/protobuf/*.bin')
fnames.sort()
with open(fnames[-1], 'rb') as fp:
test_prof = datasetprofile.DatasetProfile.from_protobuf_string(fp.read())
###Output
_____no_output_____
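###Markdown
As a quick sanity check (a sketch, assuming the reloaded `DatasetProfile` behaves like the in-memory one logged above), the restored profile can be summarized with the same `flat_summary()` call:
###Code
# Summarize the reloaded profile the same way as the freshly logged one
reloaded_summary = test_prof.flat_summary()
reloaded_summary['summary'].head()
###Output
_____no_output_____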
###Markdown
---
###Code
# Not necessary, but you can reset the whylogs session if you want
from whylabs.logs.app.session import reset_session
reset_session()
###Output
_____no_output_____
|
templates/MBKmeansFttPmeans.ipynb
|
###Markdown
Saving the output data into vars
###Code
centroids = model.cluster_centers_
labels = model.labels_
megadf["clusterlabel"]=labels
centroidDF = pd.DataFrame(centroids)
###Output
_____no_output_____
###Markdown
Plotting
###Code
plt.figure(figsize=(16,16))
titlestring = "{} with k={} records={} features={} using {}".format(__algo__, k, data.shape[0], data.shape[1], __emb__)
snsplot = sns.countplot("clusterlabel", data=megadf)
snsplot.xaxis.label.set_size(20)
snsplot.yaxis.label.set_size(20)
plt.title(
titlestring,
fontdict = {'fontsize' : 30}
)
###Output
_____no_output_____
###Markdown
*Name given to saved files*
###Code
features = data.shape[1]
records = data.shape[0]
name = "{}_{}_{}_K{}_R{}_F{}".format(__algo__, __emb__, __sentemb__, k, records, features)
name
###Output
_____no_output_____
###Markdown
Saving Data Save model
###Code
modelname = "{}_model.pkl".format(name)
pickle.dump(model, open(modelDir + modelname, 'wb'))
###Output
_____no_output_____
###Markdown
Save Plot
###Code
snspltname = "{}_plt.png".format(name)
snsplot.figure.savefig(plotDir + snspltname)
###Output
_____no_output_____
###Markdown
Save Megadf
###Code
clusterdfname = "{}_clustered_megadf.pkl".format(name)
megadf.to_pickle(megadfDir + clusterdfname)
###Output
_____no_output_____
###Markdown
Save Centroids
###Code
centroidDF = pd.DataFrame(centroids)
centroidDFname = "{}_centroids.pkl".format(name)
centroidDF.to_pickle(megadfDir + centroidDFname)
print(centroidDF.shape)
###Output
(50, 1500)
###Markdown
Open dataframe to test
###Code
sub = megadf.loc[:, ["id", "title", "abstract", "clusterlabel"]]
sub.tail()
megadf.columns
###Output
_____no_output_____
###Markdown
Performance Testing and Distribution
###Code
metadata = pd.DataFrame(columns=["Name", "Algo", "WordEmb", "SentEmb", "K", "R", "F", "T2T", "SS", "DBS", "CSavg", "CSmin", "CSmax", "T2Pavg", "T2LM", "T2LMP", "MEM"])
metadict = {
"Name":None, #Name of the save file prefix
"Algo":None, #Name of the Clustering algorithm
"WordEmb":None, #Name of the Word Embeddings used (glove, w2v, ftt)
"SentEmb":None, #Name of Sentence Embedding algorithm used
"K":None, "R":None, "F":None, #Number of clusters, records and fetures
"T2T":None, #Time required to train model
"SS":None, #Silhoutte Score
"DBS":None, #Davis Bouldin Score
"CSavg":None, #Average Cluster Size
"CSmin":None, #Minimum Cluster Size
"CSmax":None, #Maximum Cluster Size
"T2Pavg":None, #Average Time To Predict cluster of one record
"T2LM":None, #Average Time to Load Model
"T2LMP":None, #Amortized time to Predict after loading the model
"MEM":None #Memory used by the Model
}
metadict
metadict["Name"]=name
metadict["Algo"]=__algo__
metadict["WordEmb"]=__emb__
metadict["SentEmb"]=__sentemb__
metadict["K"]=k
metadict["R"]=recnum
metadict["F"]=features
metadict
###Output
_____no_output_____
###Markdown
Time to train
###Code
metadict["T2T"]=timetrain
###Output
_____no_output_____
###Markdown
Scores
###Code
ss = silhouette_score(data, labels, metric = 'euclidean')
dbs = davies_bouldin_score(data, labels)
metadict["SS"]=ss
metadict["DBS"]=dbs
###Output
_____no_output_____
###Markdown
Cluster Size
###Code
clusterdata = megadf.groupby("clusterlabel", as_index=True).size().reset_index(name="count")
clusterdata.head()
clusterdfname = "{}_clustered_counts.pkl".format(name)
clusterdata.to_pickle(megadfDir + clusterdfname)
countdata = clusterdata.groupby("count").size().reset_index(name="clusters")
display(countdata.head(3))
display(countdata.tail(3))
metadict["CSmax"] = max(clusterdata["count"])
metadict["CSmin"] = min(clusterdata["count"])
metadict["CSavg"] = np.mean(clusterdata["count"])
%matplotlib inline
plt.figure(figsize=(16,16))
sns.axes_style("whitegrid", {"axes.grid":True,
'axes.spines.left': False,
'axes.spines.bottom': False,
'axes.spines.right': False,
'axes.spines.top': False})
titlestring = "{}_Cluster_Distribution".format(name)
snsplot = sns.distplot(clusterdata["count"], kde=False, bins=max(clusterdata["count"]),
hist_kws={'edgecolor':'black'},)
snsplot.set(xlabel="Number of Papers", ylabel="Number of Clusters")
snsplot.xaxis.label.set_size(20)
snsplot.yaxis.label.set_size(20)
plt.title(
titlestring,
fontdict = {'fontsize' : 25}
)
plt.show()
snspltname = "{}_Cluster_Distribution.png".format(name)
snsplot.figure.savefig(plotDir + snspltname)
###Output
_____no_output_____
###Markdown
Prediction Time Performance
###Code
testdf = pd.DataFrame()
if recnum < 2000:
samplenum = int(recnum / 10)
else:
samplenum = 2000
for f in smalllist:
    tempdf = pd.read_pickle(f)
    # accumulate the records from every embedding file into the test pool
    testdf = testdf.append(tempdf, ignore_index = True, sort = False)
testdf = testdf.sample(samplenum, random_state=int(time.time()%100000))
predata = testdf["embedding"]
data = np.matrix(predata.to_list())
print(data.shape)
print("Starting Predicting Performance")
testmodel = model
start_time = time.time()
for d in data:
lb = testmodel.predict(d)
end_time = time.time()
timetest = end_time-start_time
avgtime = timetest/data.shape[0]
print("Avgtime: {} Totaltime: {}".format(avgtime, timetest))
metadict["T2Pavg"]=avgtime
print("Starting Loading Performance")
loadruns = 50
start_time = time.time()
for i in range(loadruns):
testmodel = pickle.load(open(modelDir + modelname, 'rb'))
end_time = time.time()
timetest = end_time-start_time
avgtime = timetest/loadruns
print("Avgtime: {} Totaltime: {}".format(avgtime, timetest))
metadict["T2LM"] = avgtime
avgtime
print("Starting Amortized Performance")
loadruns = 5
avglist = []
for i in range(loadruns):
start_time = time.time()
testmodel = pickle.load(open(modelDir + modelname, 'rb'))
for d in data:
lb = testmodel.predict(d)
end_time = time.time()
timetest = (end_time-start_time)/data.shape[0]
avglist.append(timetest)
timetest = np.sum(avglist)
avgtime = np.mean(avglist)
print("Avgtime: {} Totaltime: {}".format(avgtime, timetest))
metadict["T2LMP"] = avgtime
avgtime
modelsize = sys.getsizeof(pickle.dumps(model))
print("modelsize:", modelsize, "bytes")
metadict["MEM"]=modelsize
metadict
metadata = metadata.append(metadict, ignore_index=True)
metadata
metadataname = "{}_metadata.pkl".format(name)
metadata.to_pickle(metadataDir + metadataname)
!cd ..
!find / -type f -name "*_metadata.pkl"
%cd ..
!cp /content/modelMetaData/MBKMEANS_ftt_pmeans5_K50_R3000_F1500_metadata.pkl /content/drive/My\ Drive/.
###Output
_____no_output_____
###Markdown
***Set parameters***
###Code
__algo__ = "MBKMEANS" #Name of the Clustering algorithm
__emb__ = "ftt" #Name of the Word Embeddings used (glove, w2v, ftt), MUST set directory below
__sentemb__ = "pmeans5" #Name of Sentence Embedding algorithm used
recnum = 3000 #Number of records to be read from files
k = 50 #Number of Clusters
usesqrt = False #Set value of k to sqrt of recnum, overrides k
randomsample = False #Random Sampling to be True/False for records which are read
embedDir = "../MegaSentEmbs/MegaSentEmbs/" #Directory where embeddings are saved for that selected embedding
modelDir = "../models/" #Directory where models are saved
megadfDir = "../MegaDfs/" #Directory Where Megadf is to be saved
plotDir = "../plots/" #Directory where plots are saved
metadataDir = "../modelMetaData/" #Directory where performance and distribution params are to be stored
dumpDir = "../dump/" #Directory where test outcomes are saved
###Output
_____no_output_____
###Markdown
Actual Code imports and time
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import cluster, datasets
from sklearn.metrics import silhouette_score, davies_bouldin_score
import seaborn as sns
import os, subprocess, sys
import datetime, time
import pickle
###Output
_____no_output_____
###Markdown
File Settings
###Code
oldlist = os.listdir(embedDir)
filelist = sorted([embedDir+f for f in oldlist if f[-3:]=="pkl"])
filenum = len(filelist)
smalllist = filelist[:filenum]
print("Length of Smalllist: ", len(smalllist))
###Output
Length of Smalllist: 123
###Markdown
Number of Records. It is recommended to set this in the parameters section at the top.
###Code
recnum = recnum
###Output
_____no_output_____
###Markdown
Read all the pandas dataframes
###Code
%%time
megadf = pd.DataFrame()
if randomsample == True:
print("randomsample: ", randomsample)
for f in smalllist:
tempdf = pd.read_pickle(f)
megadf = megadf.append(tempdf, ignore_index = True)
megadf = megadf.sample(recnum, random_state=42)
else:
print("randomsample: ", randomsample)
for f in smalllist:
tempdf = pd.read_pickle(f)
megadf = megadf.append(tempdf, ignore_index = True)
if megadf.shape[0] >= recnum:
megadf = megadf[:recnum]
break
print("megadf.shape: ", megadf.shape)
predata = megadf["embedding"]
data = np.matrix(predata.to_list())
print(data.shape)
###Output
(3000, 1500)
###Markdown
Number of Clusters. It is recommended to set this in the parameters section at the top.
###Code
if usesqrt == True:
print("usesqrt: ", usesqrt)
sqrt_k = int(np.sqrt(data.shape[0]))
k = int(sqrt_k)
else:
print("usesqrt: ", usesqrt)
k = k
print("k: ", k)
###Output
usesqrt: False
k: 50
###Markdown
Clustering. Please modify the model construction here to change the clustering algorithm; an alternative estimator is sketched below.
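###Markdown
As an illustration only (a hedged sketch, not part of the original template), swapping in another scikit-learn clusterer with a compatible `fit`/`predict`/`cluster_centers_` interface could look like this; `__algo__` should be updated to match whichever estimator is chosen.
###Code
from sklearn import cluster

def build_model(k, algo="mbkmeans"):
    # Hypothetical helper: pick the clustering estimator by name.
    # Both options expose fit(), predict() and cluster_centers_,
    # so the rest of the template (labels, centroids, scores, timing) is unchanged.
    if algo == "kmeans":
        return cluster.KMeans(n_clusters=k, n_init=20, random_state=42, max_iter=1000)
    return cluster.MiniBatchKMeans(n_clusters=k, n_init=20, random_state=42,
                                   batch_size=32, max_iter=1000)

# Example: model = build_model(k, algo="kmeans"), then model.fit(data) as in the next cell
###Output
_____no_output_____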
###Code
%%time
print("Starting Clustering Process")
start_time = time.time()
model = cluster.MiniBatchKMeans(n_clusters=k, n_init = 20, random_state=42, batch_size=32, max_iter=1000, verbose=1)
model.fit(data)
end_time = time.time()
timetrain = round(end_time-start_time, 2)
print("done! {}".format(timetrain))
print("k_means.fit(data) Done!")
###Output
Starting Clustering Process
Init 1/20 with method: k-means++
Inertia for init 1/20: 716.080461
Init 2/20 with method: k-means++
Inertia for init 2/20: 690.267355
Init 3/20 with method: k-means++
Inertia for init 3/20: 760.426715
Init 4/20 with method: k-means++
Inertia for init 4/20: 689.231336
Init 5/20 with method: k-means++
Inertia for init 5/20: 724.978563
Init 6/20 with method: k-means++
Inertia for init 6/20: 712.096234
Init 7/20 with method: k-means++
Inertia for init 7/20: 694.766271
Init 8/20 with method: k-means++
Inertia for init 8/20: 746.612917
Init 9/20 with method: k-means++
Inertia for init 9/20: 763.161511
Init 10/20 with method: k-means++
Inertia for init 10/20: 678.606519
Init 11/20 with method: k-means++
Inertia for init 11/20: 744.304804
Init 12/20 with method: k-means++
Inertia for init 12/20: 727.933918
Init 13/20 with method: k-means++
Inertia for init 13/20: 741.147767
Init 14/20 with method: k-means++
Inertia for init 14/20: 706.395009
Init 15/20 with method: k-means++
Inertia for init 15/20: 729.062518
Init 16/20 with method: k-means++
Inertia for init 16/20: 754.673504
Init 17/20 with method: k-means++
Inertia for init 17/20: 669.037259
Init 18/20 with method: k-means++
Inertia for init 18/20: 671.659697
Init 19/20 with method: k-means++
Inertia for init 19/20: 742.262068
Init 20/20 with method: k-means++
Inertia for init 20/20: 687.260708
Minibatch iteration 1/94000: mean batch inertia: 14.081355, ewa inertia: 14.081355
Minibatch iteration 2/94000: mean batch inertia: 12.479290, ewa inertia: 14.047189
Minibatch iteration 3/94000: mean batch inertia: 11.533647, ewa inertia: 13.993584
Minibatch iteration 4/94000: mean batch inertia: 10.716021, ewa inertia: 13.923686
Minibatch iteration 5/94000: mean batch inertia: 16.051583, ewa inertia: 13.969066
Minibatch iteration 6/94000: mean batch inertia: 11.833076, ewa inertia: 13.923514
Minibatch iteration 7/94000: mean batch inertia: 11.802464, ewa inertia: 13.878280
Minibatch iteration 8/94000: mean batch inertia: 13.393167, ewa inertia: 13.867934
Minibatch iteration 9/94000: mean batch inertia: 10.906224, ewa inertia: 13.804772
[MiniBatchKMeans] Reassigning 15 cluster centers.
Minibatch iteration 10/94000: mean batch inertia: 13.183741, ewa inertia: 13.791528
Minibatch iteration 11/94000: mean batch inertia: 11.474483, ewa inertia: 13.742114
Minibatch iteration 12/94000: mean batch inertia: 12.344569, ewa inertia: 13.712309
Minibatch iteration 13/94000: mean batch inertia: 12.529218, ewa inertia: 13.687079
Minibatch iteration 14/94000: mean batch inertia: 11.946227, ewa inertia: 13.649953
Minibatch iteration 15/94000: mean batch inertia: 12.486944, ewa inertia: 13.625150
Minibatch iteration 16/94000: mean batch inertia: 12.027948, ewa inertia: 13.591088
Minibatch iteration 17/94000: mean batch inertia: 11.870138, ewa inertia: 13.554387
Minibatch iteration 18/94000: mean batch inertia: 11.694617, ewa inertia: 13.514725
Minibatch iteration 19/94000: mean batch inertia: 11.444571, ewa inertia: 13.470576
Minibatch iteration 20/94000: mean batch inertia: 11.663369, ewa inertia: 13.432035
Minibatch iteration 21/94000: mean batch inertia: 12.266912, ewa inertia: 13.407188
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 22/94000: mean batch inertia: 12.101696, ewa inertia: 13.379346
Minibatch iteration 23/94000: mean batch inertia: 11.429093, ewa inertia: 13.337755
Minibatch iteration 24/94000: mean batch inertia: 11.032372, ewa inertia: 13.288590
Minibatch iteration 25/94000: mean batch inertia: 12.496520, ewa inertia: 13.271698
Minibatch iteration 26/94000: mean batch inertia: 10.568582, ewa inertia: 13.214051
Minibatch iteration 27/94000: mean batch inertia: 11.625436, ewa inertia: 13.180171
Minibatch iteration 28/94000: mean batch inertia: 12.056845, ewa inertia: 13.156215
Minibatch iteration 29/94000: mean batch inertia: 10.738427, ewa inertia: 13.104653
Minibatch iteration 30/94000: mean batch inertia: 10.751138, ewa inertia: 13.054461
Minibatch iteration 31/94000: mean batch inertia: 11.959150, ewa inertia: 13.031102
Minibatch iteration 32/94000: mean batch inertia: 11.335359, ewa inertia: 12.994939
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 33/94000: mean batch inertia: 12.256498, ewa inertia: 12.979190
Minibatch iteration 34/94000: mean batch inertia: 12.073118, ewa inertia: 12.959867
Minibatch iteration 35/94000: mean batch inertia: 11.545742, ewa inertia: 12.929709
Minibatch iteration 36/94000: mean batch inertia: 11.676119, ewa inertia: 12.902975
Minibatch iteration 37/94000: mean batch inertia: 12.497614, ewa inertia: 12.894330
Minibatch iteration 38/94000: mean batch inertia: 11.067435, ewa inertia: 12.855369
Minibatch iteration 39/94000: mean batch inertia: 12.574830, ewa inertia: 12.849387
Minibatch iteration 40/94000: mean batch inertia: 11.492686, ewa inertia: 12.820453
Minibatch iteration 41/94000: mean batch inertia: 11.576354, ewa inertia: 12.793921
Minibatch iteration 42/94000: mean batch inertia: 12.506592, ewa inertia: 12.787794
Minibatch iteration 43/94000: mean batch inertia: 12.348061, ewa inertia: 12.778416
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 44/94000: mean batch inertia: 12.444002, ewa inertia: 12.771284
Minibatch iteration 45/94000: mean batch inertia: 12.028808, ewa inertia: 12.755450
Minibatch iteration 46/94000: mean batch inertia: 11.479543, ewa inertia: 12.728240
Minibatch iteration 47/94000: mean batch inertia: 11.186082, ewa inertia: 12.695351
Minibatch iteration 48/94000: mean batch inertia: 12.171577, ewa inertia: 12.684181
Minibatch iteration 49/94000: mean batch inertia: 11.996319, ewa inertia: 12.669512
Minibatch iteration 50/94000: mean batch inertia: 11.768846, ewa inertia: 12.650304
Minibatch iteration 51/94000: mean batch inertia: 11.535296, ewa inertia: 12.626525
Minibatch iteration 52/94000: mean batch inertia: 10.979254, ewa inertia: 12.591395
Minibatch iteration 53/94000: mean batch inertia: 10.066101, ewa inertia: 12.537540
Minibatch iteration 54/94000: mean batch inertia: 11.298627, ewa inertia: 12.511118
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 55/94000: mean batch inertia: 11.751577, ewa inertia: 12.494920
Minibatch iteration 56/94000: mean batch inertia: 11.006111, ewa inertia: 12.463170
Minibatch iteration 57/94000: mean batch inertia: 11.269034, ewa inertia: 12.437703
Minibatch iteration 58/94000: mean batch inertia: 11.109018, ewa inertia: 12.409367
Minibatch iteration 59/94000: mean batch inertia: 11.784090, ewa inertia: 12.396033
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 60/94000: mean batch inertia: 11.521606, ewa inertia: 12.377384
Minibatch iteration 61/94000: mean batch inertia: 11.582916, ewa inertia: 12.360441
Minibatch iteration 62/94000: mean batch inertia: 12.407711, ewa inertia: 12.361449
Minibatch iteration 63/94000: mean batch inertia: 11.598081, ewa inertia: 12.345170
Minibatch iteration 64/94000: mean batch inertia: 12.897546, ewa inertia: 12.356950
Minibatch iteration 65/94000: mean batch inertia: 12.764980, ewa inertia: 12.365651
Minibatch iteration 66/94000: mean batch inertia: 12.658345, ewa inertia: 12.371894
Minibatch iteration 67/94000: mean batch inertia: 11.770112, ewa inertia: 12.359060
Minibatch iteration 68/94000: mean batch inertia: 10.576079, ewa inertia: 12.321036
Minibatch iteration 69/94000: mean batch inertia: 11.254474, ewa inertia: 12.298290
Minibatch iteration 70/94000: mean batch inertia: 12.085465, ewa inertia: 12.293751
Minibatch iteration 71/94000: mean batch inertia: 11.535460, ewa inertia: 12.277580
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 72/94000: mean batch inertia: 12.767467, ewa inertia: 12.288027
Minibatch iteration 73/94000: mean batch inertia: 11.735259, ewa inertia: 12.276239
Minibatch iteration 74/94000: mean batch inertia: 11.683545, ewa inertia: 12.263599
Minibatch iteration 75/94000: mean batch inertia: 12.241721, ewa inertia: 12.263132
Minibatch iteration 76/94000: mean batch inertia: 10.011258, ewa inertia: 12.215108
Minibatch iteration 77/94000: mean batch inertia: 9.130214, ewa inertia: 12.149319
Minibatch iteration 78/94000: mean batch inertia: 13.262750, ewa inertia: 12.173064
Minibatch iteration 79/94000: mean batch inertia: 11.243392, ewa inertia: 12.153238
Minibatch iteration 80/94000: mean batch inertia: 9.847285, ewa inertia: 12.104061
Minibatch iteration 81/94000: mean batch inertia: 11.496850, ewa inertia: 12.091111
Minibatch iteration 82/94000: mean batch inertia: 9.870761, ewa inertia: 12.043759
Minibatch iteration 83/94000: mean batch inertia: 12.575337, ewa inertia: 12.055096
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 84/94000: mean batch inertia: 11.905405, ewa inertia: 12.051904
Minibatch iteration 85/94000: mean batch inertia: 11.661610, ewa inertia: 12.043580
Minibatch iteration 86/94000: mean batch inertia: 10.388805, ewa inertia: 12.008290
Minibatch iteration 87/94000: mean batch inertia: 10.652786, ewa inertia: 11.979382
Minibatch iteration 88/94000: mean batch inertia: 11.487881, ewa inertia: 11.968900
Minibatch iteration 89/94000: mean batch inertia: 11.386580, ewa inertia: 11.956482
Minibatch iteration 90/94000: mean batch inertia: 11.343468, ewa inertia: 11.943408
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 91/94000: mean batch inertia: 10.353478, ewa inertia: 11.909501
Minibatch iteration 92/94000: mean batch inertia: 10.269360, ewa inertia: 11.874523
Minibatch iteration 93/94000: mean batch inertia: 11.587048, ewa inertia: 11.868392
Minibatch iteration 94/94000: mean batch inertia: 11.441170, ewa inertia: 11.859281
Minibatch iteration 95/94000: mean batch inertia: 10.671689, ewa inertia: 11.833954
Minibatch iteration 96/94000: mean batch inertia: 11.499542, ewa inertia: 11.826823
Minibatch iteration 97/94000: mean batch inertia: 11.822805, ewa inertia: 11.826737
Minibatch iteration 98/94000: mean batch inertia: 11.229689, ewa inertia: 11.814004
Minibatch iteration 99/94000: mean batch inertia: 12.241952, ewa inertia: 11.823131
Minibatch iteration 100/94000: mean batch inertia: 10.873039, ewa inertia: 11.802869
Minibatch iteration 101/94000: mean batch inertia: 11.444828, ewa inertia: 11.795233
Minibatch iteration 102/94000: mean batch inertia: 11.316167, ewa inertia: 11.785017
Minibatch iteration 103/94000: mean batch inertia: 11.140793, ewa inertia: 11.771278
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 104/94000: mean batch inertia: 13.486616, ewa inertia: 11.807859
Minibatch iteration 105/94000: mean batch inertia: 11.905114, ewa inertia: 11.809933
Minibatch iteration 106/94000: mean batch inertia: 10.005490, ewa inertia: 11.771451
Minibatch iteration 107/94000: mean batch inertia: 13.634778, ewa inertia: 11.811189
Minibatch iteration 108/94000: mean batch inertia: 12.368961, ewa inertia: 11.823084
Minibatch iteration 109/94000: mean batch inertia: 11.177295, ewa inertia: 11.809312
Minibatch iteration 110/94000: mean batch inertia: 11.525227, ewa inertia: 11.803254
Minibatch iteration 111/94000: mean batch inertia: 10.429805, ewa inertia: 11.773963
Minibatch iteration 112/94000: mean batch inertia: 9.743016, ewa inertia: 11.730651
Minibatch iteration 113/94000: mean batch inertia: 11.055937, ewa inertia: 11.716262
Minibatch iteration 114/94000: mean batch inertia: 10.616200, ewa inertia: 11.692801
Minibatch iteration 115/94000: mean batch inertia: 10.019674, ewa inertia: 11.657120
Minibatch iteration 116/94000: mean batch inertia: 11.005973, ewa inertia: 11.643233
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 117/94000: mean batch inertia: 11.895338, ewa inertia: 11.648610
Minibatch iteration 118/94000: mean batch inertia: 11.529399, ewa inertia: 11.646068
Minibatch iteration 119/94000: mean batch inertia: 12.651384, ewa inertia: 11.667507
Minibatch iteration 120/94000: mean batch inertia: 11.366261, ewa inertia: 11.661083
Minibatch iteration 121/94000: mean batch inertia: 10.404779, ewa inertia: 11.634291
Minibatch iteration 122/94000: mean batch inertia: 10.710276, ewa inertia: 11.614585
Minibatch iteration 123/94000: mean batch inertia: 10.915980, ewa inertia: 11.599686
Minibatch iteration 124/94000: mean batch inertia: 12.176873, ewa inertia: 11.611995
Minibatch iteration 125/94000: mean batch inertia: 10.021942, ewa inertia: 11.578086
Minibatch iteration 126/94000: mean batch inertia: 10.749293, ewa inertia: 11.560411
Minibatch iteration 127/94000: mean batch inertia: 11.576580, ewa inertia: 11.560755
Minibatch iteration 128/94000: mean batch inertia: 11.011921, ewa inertia: 11.549051
Minibatch iteration 129/94000: mean batch inertia: 11.379705, ewa inertia: 11.545439
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 130/94000: mean batch inertia: 12.061746, ewa inertia: 11.556450
Minibatch iteration 131/94000: mean batch inertia: 10.795418, ewa inertia: 11.540220
Minibatch iteration 132/94000: mean batch inertia: 11.888791, ewa inertia: 11.547654
Minibatch iteration 133/94000: mean batch inertia: 10.551104, ewa inertia: 11.526401
Minibatch iteration 134/94000: mean batch inertia: 9.397302, ewa inertia: 11.480996
Minibatch iteration 135/94000: mean batch inertia: 11.842881, ewa inertia: 11.488713
Minibatch iteration 136/94000: mean batch inertia: 11.969118, ewa inertia: 11.498959
Minibatch iteration 137/94000: mean batch inertia: 10.461563, ewa inertia: 11.476835
Minibatch iteration 138/94000: mean batch inertia: 9.840502, ewa inertia: 11.441938
Minibatch iteration 139/94000: mean batch inertia: 11.524781, ewa inertia: 11.443705
Minibatch iteration 140/94000: mean batch inertia: 10.820325, ewa inertia: 11.430410
Minibatch iteration 141/94000: mean batch inertia: 9.029712, ewa inertia: 11.379213
Minibatch iteration 142/94000: mean batch inertia: 10.567205, ewa inertia: 11.361895
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 143/94000: mean batch inertia: 10.284371, ewa inertia: 11.338916
Minibatch iteration 144/94000: mean batch inertia: 11.661714, ewa inertia: 11.345800
Minibatch iteration 145/94000: mean batch inertia: 10.889306, ewa inertia: 11.336065
Minibatch iteration 146/94000: mean batch inertia: 11.356776, ewa inertia: 11.336506
Minibatch iteration 147/94000: mean batch inertia: 10.445622, ewa inertia: 11.317507
Minibatch iteration 148/94000: mean batch inertia: 11.672482, ewa inertia: 11.325078
Minibatch iteration 149/94000: mean batch inertia: 10.426704, ewa inertia: 11.305919
Minibatch iteration 150/94000: mean batch inertia: 11.009564, ewa inertia: 11.299598
Minibatch iteration 151/94000: mean batch inertia: 13.432962, ewa inertia: 11.345095
Minibatch iteration 152/94000: mean batch inertia: 13.179565, ewa inertia: 11.384217
Minibatch iteration 153/94000: mean batch inertia: 10.640913, ewa inertia: 11.368365
[MiniBatchKMeans] Reassigning 16 cluster centers.
Minibatch iteration 154/94000: mean batch inertia: 11.127728, ewa inertia: 11.363234
Minibatch iteration 155/94000: mean batch inertia: 10.193605, ewa inertia: 11.338290
Minibatch iteration 156/94000: mean batch inertia: 12.412322, ewa inertia: 11.361195
Minibatch iteration 157/94000: mean batch inertia: 11.072166, ewa inertia: 11.355031
Minibatch iteration 158/94000: mean batch inertia: 10.621327, ewa inertia: 11.339384
Minibatch iteration 159/94000: mean batch inertia: 10.359053, ewa inertia: 11.318477
Minibatch iteration 160/94000: mean batch inertia: 11.182665, ewa inertia: 11.315581
Converged (lack of improvement in inertia) at iteration 160/94000
Computing label assignment and total inertia
done! 1.64
k_means.fit(data) Done!
CPU times: user 1.92 s, sys: 864 ms, total: 2.78 s
Wall time: 1.64 s
|
Computer Vision/Single Shot Detector (SSD)/Pytorch/SSD Notes.ipynb
|
###Markdown
Object Detection with SSD Importing the libraries
###Code
import torch
import cv2
import imageio
# SSD model is taken from [https://github.com/amdegroot/ssd.pytorch] and adapted for torch 1.11.0
from data import BaseTransform, VOC_CLASSES as labelmap
from ssd import build_ssd
###Output
_____no_output_____
###Markdown
Detection Function
###Code
# Frame by frame detection
def detect(frame, net, transform):
height, width = frame.shape[:2]
frame_t = transform(frame)[0]
    x = torch.from_numpy(frame_t).permute(2, 0, 1) # HWC -> CHW (channels first) with .permute()
    x = x.unsqueeze(0) # add a batch dimension
with torch.no_grad():
y = net(x) # Feed the frame to Neural Network
detections = y.data # [batch, number of classes, number of occurence, (score, x0, y0, x1, y1)]
scale = torch.Tensor([width, height, width, height])
for i in range(detections.size(1)):
j = 0
while detections[0, i, j, 0] >= 0.3: # Score >= 0.3
pt = (detections[0, i, j, 1:] * scale).numpy()
cv2.rectangle(frame, (int(pt[0]), int(pt[1])), (int(pt[2]), int(pt[3])),
(153, 0, 0), 2)
cv2.putText(frame, labelmap[i - 1], (int(pt[0]), int(pt[1])),
cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 2, cv2.LINE_AA)
j = j + 1
return frame
###Output
_____no_output_____
###Markdown
SSD Neural Network. Download pretrained SSD weights (ssd300_mAP_77.43_v2.pth) from [this link](https://s3.amazonaws.com/amdegroot-models/ssd300_mAP_77.43_v2.pth).
###Code
net = build_ssd('test')
net.load_state_dict(torch.load('ssd300_mAP_77.43_v2.pth',
map_location = lambda storage,
loc: storage))
transform = BaseTransform(net.size, (104/256.0, 117/256.0, 123/256.0))
###Output
_____no_output_____
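###Markdown
Before running on a whole video, the detector can be sanity-checked on a single image. This is a sketch only; 'test.jpg' is a placeholder path for any RGB image on disk.
###Code
# Single-image check (placeholder file names)
frame = imageio.imread('test.jpg')
result = detect(frame, net.eval(), transform)
imageio.imwrite('test_output.jpg', result)
###Output
_____no_output_____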
###Markdown
Object Detection on a Video
###Code
reader = imageio.get_reader('video.mp4')
fps = reader.get_meta_data()['fps']
writer = imageio.get_writer('output.mp4', fps = fps, macro_block_size = 1)
for i, frame in enumerate(reader):
frame = detect(frame, net.eval(), transform)
writer.append_data(frame)
writer.close()
###Output
C:\Users\ersoy\anaconda3\lib\site-packages\torch\nn\functional.py:780: UserWarning: Note that order of the arguments: ceil_mode and return_indices will changeto match the args list in nn.MaxPool2d in a future release.
warnings.warn("Note that order of the arguments: ceil_mode and return_indices will change"
|
docs/00-How_to_Evaluate_functions_in_sympy.ipynb
|
###Markdown
How to evaluate functions in sympy Sympy knows functions. After `from sympy import *`, which is usually ok in notebooks, functions like `sin(x)`, `cos(x)`, `exp(x)` or `ln(x)` are available. Sympy also knows undefined functions, like `f = Function('f')` or `g = Function('g')(x)`. Undefined functions can be used in differential equations, for example. But sometimes it is useful to define a function from a given expression, like $$ f(x) = a\,x^2$$ In these cases it is often recommended to use expressions: `a,x = symbols('a,x')` and `expr = a*x**2`. Now it is possible to simulate the function $f(x)=a\,x^2$ using this expression, e.g. `expr.subs(x,2)` to calculate $f(2)$ and `expr.diff(x)` to calculate $f'(x)$. If you really want to have a function, then it is often recommended to use `def` or `lambda` for this. Let's try some examples:
###Code
from sympy import *
init_printing()
# some examples for the sin-function:
x = Symbol('x')
[sin(x), sin(0), sin(pi/2), sin(pi)]
# Undefined functions:
f = Function('f')
f
f(x).diff()
g = Function('g')(x)
g
g.diff()
###Output
_____no_output_____
###Markdown
Now let's try to use the function $$ f(x) = a\,x^2$$ Use an expression to simulate the function:
###Code
# Define symbols
a,x = symbols('a,x')
# define expression
expr = a*x**2
# output expression
expr
# Evaluate f(2)
expr.subs(x,2)
# Determine the expression that belongs to f'(x)
expr_1 = expr.diff(x)
expr_1
# Evaluate f'(1)
expr_1.subs(x,1)
###Output
_____no_output_____
###Markdown
If we want to have a 'real' function, we could try to use `def`:
###Code
def f(x):
"""
define function
a: global variable
x: bound to the function
"""
return a*x**2
# Evaluate f(2)
f(2)
###Output
_____no_output_____
###Markdown
This seems to be ok, but it does not behave the same way as our previously defined expression does. If, for example, at some point in the notebook `a` is redefined, we get:
###Code
a = 5
# the global variable a and the symbol a in expr are different:
expr.subs(x,2)
# But here, a is the global variable of value 5
f(2)
###Output
_____no_output_____
###Markdown
This happens because the function `f` uses the global variable `a`, which was redefined to `a=5`. But `expr` is defined with a Symbol `a`, and this Symbol is not altered by changing the global variable `a`. If this is unexpected behaviour, we could try to define the symbol `a` inside the function `f`:
###Code
def f(x):
"""
define function
a: local variable, always a Symbol
x: bound to the function
"""
a = Symbol('a')
return a*x**2
a
f(2)
###Output
_____no_output_____
###Markdown
How to define the derivative of `f`? The most straightforward way seems to be:
###Code
def f_1(x):
"""
define f'(x)
for that, differentiate the expression f(x) wrt x
and return the result.
"""
return f(x).diff(x)
f_1(x)
###Output
_____no_output_____
###Markdown
This seems to work, but it fails:
###Code
# This throws a ValueError
# The reason is that f_1 tries to diff f(2) wrt 2
f_1(2)
###Output
_____no_output_____
###Markdown
This means we must be careful when defining the derivative of the function `f`:
###Code
def f_1(x):
"""
define f'(x)
for that, define a local variable v as Symbol
and determine f'(v) wrt v.
Substitute the symbol v by the bound variable x
and return the result.
"""
v = Symbol('v')
return f(v).diff(v).subs(v,x)
f_1(x)
f_1(2)
###Output
_____no_output_____
###Markdown
Now assume the value of the free symbol `a` has to be determined to satisfy \begin{align*}f(x) &= a\,x^2 \\f(2) &= 8\end{align*} Using `expr` we might write
###Code
eq = Eq(expr.subs(x,2),8)
eq
###Output
_____no_output_____
###Markdown
This gives the solution
###Code
sol = solve(eq)
sol
###Output
_____no_output_____
###Markdown
With this, we can redefine `expr` as
###Code
# We need to redefine a as Symbol
a = Symbol('a')
expr = expr.subs(a,sol[0])
expr
# Test:
expr.subs(x,2)
###Output
_____no_output_____
###Markdown
Using `f(x)` this would read:
###Code
eq = Eq(f(2),8)
eq
sol = solve(eq)
sol
###Output
_____no_output_____
###Markdown
But now, $f$ needs to be rewritten by hand so that $f(x) = 2\,x^2$ holds.
###Code
def f(x):
return 2*x**2
f(2)
###Output
_____no_output_____
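###Markdown
A sketch of one way to avoid rewriting `f` by hand: build the callable from the solved expression, so that changing the solution automatically changes the function (this reuses `a`, `x` and `sol` from above).
###Code
# Build a callable from the solved expression instead of editing `def f` manually
expr_solved = (a*x**2).subs(a, sol[0])
f_from_expr = lambda value: expr_solved.subs(x, value)
f_from_expr(2)
###Output
_____no_output_____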
|
Model backlog/Train/6-commonlit-roberta-base-seq-256-no-sampling.ipynb
|
###Markdown
Dependencies
###Code
import random, os, warnings, math
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import optimizers, losses, metrics, Model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LearningRateScheduler
from transformers import TFAutoModelForSequenceClassification, TFAutoModel, AutoTokenizer
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
seed = 0
seed_everything(seed)
sns.set(style='whitegrid')
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', 150)
###Output
_____no_output_____
###Markdown
Hardware configuration
###Code
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print(f'Running on TPU {tpu.master()}')
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
strategy = tf.distribute.get_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Load data
###Code
train_filepath = '/kaggle/input/commonlitreadabilityprize/train.csv'
train = pd.read_csv(train_filepath)
print(f'Train samples: {len(train)}')
display(train.head())
# removing unused columns
train.drop(['url_legal', 'license'], axis=1, inplace=True)
###Output
Train samples: 2834
###Markdown
Model parameters
###Code
BATCH_SIZE = 8 * REPLICAS
LEARNING_RATE = 1e-5 * REPLICAS
EPOCHS = 35
ES_PATIENCE = 10
PATIENCE = 2
N_FOLDS = 5
N_USED_FOLDS = 1
SEQ_LEN = 256
BASE_MODEL = '/kaggle/input/huggingface-roberta/roberta-base/'
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def custom_standardization(text):
text = text.lower() # if encoder is uncased
text = text.strip()
return text
def sample_target(features, target):
mean, stddev = target
sampled_target = tf.random.normal([], mean=tf.cast(mean, dtype=tf.float32),
stddev=tf.cast(stddev, dtype=tf.float32), dtype=tf.float32)
return (features, sampled_target)
def get_dataset(pandas_df, tokenizer, labeled=True, ordered=False, repeated=False,
is_sampled=False, batch_size=32, seq_len=128):
"""
Return a Tensorflow dataset ready for training or inference.
"""
text = [custom_standardization(text) for text in pandas_df['excerpt']]
# Tokenize inputs
tokenized_inputs = tokenizer(text, max_length=seq_len, truncation=True,
padding='max_length', return_tensors='tf')
if labeled:
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': tokenized_inputs['input_ids'],
'attention_mask': tokenized_inputs['attention_mask']},
(pandas_df['target'], pandas_df['standard_error'])))
if is_sampled:
dataset = dataset.map(sample_target, num_parallel_calls=tf.data.AUTOTUNE)
else:
dataset = tf.data.Dataset.from_tensor_slices({'input_ids': tokenized_inputs['input_ids'],
'attention_mask': tokenized_inputs['attention_mask']})
if repeated:
dataset = dataset.repeat()
if not ordered:
dataset = dataset.shuffle(1024)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.AUTOTUNE)
return dataset
def plot_metrics(history):
metric_list = list(history.keys())
size = len(metric_list)//2
fig, axes = plt.subplots(size, 1, sharex='col', figsize=(20, size * 5))
axes = axes.flatten()
for index in range(len(metric_list)//2):
metric_name = metric_list[index]
val_metric_name = metric_list[index+size]
axes[index].plot(history[metric_name], label='Train %s' % metric_name)
axes[index].plot(history[val_metric_name], label='Validation %s' % metric_name)
axes[index].legend(loc='best', fontsize=16)
axes[index].set_title(metric_name)
plt.xlabel('Epochs', fontsize=16)
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
Model
###Code
def model_fn(encoder, seq_len=256):
input_ids = L.Input(shape=(seq_len,), dtype=tf.int32, name='input_ids')
input_attention_mask = L.Input(shape=(seq_len,), dtype=tf.int32, name='attention_mask')
outputs = encoder({'input_ids': input_ids,
'attention_mask': input_attention_mask})
last_hidden_state = outputs['last_hidden_state']
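# Mean-pool the token embeddings and regress a single readability score with a linear head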
x = L.GlobalAveragePooling1D()(last_hidden_state)
output = L.Dense(1, name='output')(x)
model = Model(inputs=[input_ids, input_attention_mask], outputs=output)
optimizer = optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(optimizer=optimizer,
loss=losses.MeanSquaredError(),
metrics=[metrics.RootMeanSquaredError()])
return model
with strategy.scope():
encoder = TFAutoModel.from_pretrained(BASE_MODEL)
model = model_fn(encoder, SEQ_LEN)
model.summary()
###Output
Some layers from the model checkpoint at /kaggle/input/huggingface-roberta/roberta-base/ were not used when initializing TFRobertaModel: ['lm_head']
- This IS expected if you are initializing TFRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFRobertaModel were initialized from the model checkpoint at /kaggle/input/huggingface-roberta/roberta-base/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFRobertaModel for predictions without further training.
###Markdown
Training
###Code
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
skf = KFold(n_splits=N_FOLDS, shuffle=True, random_state=seed)
oof_pred = []
oof_labels = []
history_list = []
for fold,(idxT, idxV) in enumerate(skf.split(train)):
if fold >= N_USED_FOLDS:
break
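# Re-initialize the TPU system between folds so each fold starts from fresh device memory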
if tpu: tf.tpu.experimental.initialize_tpu_system(tpu)
print(f'\nFOLD: {fold+1}')
print(f'TRAIN: {len(idxT)} VALID: {len(idxV)}')
# Model
K.clear_session()
with strategy.scope():
encoder = TFAutoModel.from_pretrained(BASE_MODEL)
model = model_fn(encoder, SEQ_LEN)
model_path = f'model_{fold}.h5'
es = EarlyStopping(monitor='val_root_mean_squared_error', mode='min',
patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_root_mean_squared_error', mode='min',
save_best_only=True, save_weights_only=True)
# Train
history = model.fit(x=get_dataset(train.loc[idxT], tokenizer, repeated=True, is_sampled=False,
batch_size=BATCH_SIZE, seq_len=SEQ_LEN),
validation_data=get_dataset(train.loc[idxV], tokenizer, ordered=True,
batch_size=BATCH_SIZE, seq_len=SEQ_LEN),
steps_per_epoch=50,
callbacks=[es, checkpoint],
epochs=EPOCHS,
verbose=2).history
history_list.append(history)
# Load the best weights saved by the checkpoint
model.load_weights(model_path)
# Results
print(f"#### FOLD {fold+1} OOF RMSE = {np.min(history['val_root_mean_squared_error']):.4f}")
# OOF predictions
valid_ds = get_dataset(train.loc[idxV], tokenizer, ordered=True, batch_size=BATCH_SIZE, seq_len=SEQ_LEN)
oof_labels.append([target[0].numpy() for sample, target in iter(valid_ds.unbatch())])
x_oof = valid_ds.map(lambda sample, target: sample)
oof_pred.append(model.predict(x_oof))
###Output
FOLD: 1
TRAIN: 2267 VALID: 567
###Markdown
Model loss and metrics graph
###Code
for fold, history in enumerate(history_list):
print(f'\nFOLD: {fold+1}')
plot_metrics(history)
###Output
FOLD: 1
###Markdown
 Model evaluationWe evaluate the model on the `OOF` (`Out Of Fold`) predictions: since we train with `K-Fold` cross-validation, the model eventually sees all of the data, and the correct way to evaluate each fold is to look at the predictions for the samples that were held out of that fold. OOF metrics
###Code
y_true = np.concatenate(oof_labels)
y_preds = np.concatenate(oof_pred)
for fold, history in enumerate(history_list):
print(f"FOLD {fold+1} RMSE: {np.min(history['val_root_mean_squared_error']):.4f}")
print(f'OOF RMSE: {mean_squared_error(y_true, y_preds, squared=False):.4f}')
###Output
FOLD 1 RMSE: 0.5946
OOF RMSE: 0.5946
###Markdown
 **Error analysis**, label x prediction distributionHere we compare the distribution of the labels with the distribution of the predicted values; in a perfect scenario the two distributions would align.
###Code
preds_df = pd.DataFrame({'Label': y_true, 'Prediction': y_preds[:,0]})
fig, ax = plt.subplots(1, 1, figsize=(20, 6))
sns.distplot(preds_df['Label'], ax=ax, label='Label')
sns.distplot(preds_df['Prediction'], ax=ax, label='Prediction')
ax.legend()
plt.show()
sns.jointplot(data=preds_df, x='Label', y='Prediction', kind='reg', height=10)
plt.show()
###Output
_____no_output_____
|
notebooks/03y1_Multi-Mission-Intro.ipynb
|
###Markdown
Multi-Mission MAG> Abstract: An introduction to "the greater Swarm": calibrated platform magnetometer data from other missions. We begin with Cryosat-2, GRACE, and GRACE-FO.See also:- https://nbviewer.jupyter.org/github/pacesm/jupyter_notebooks/blob/master/MAG/MAG_multi-mission_CHAOS_residuals_demo.ipynb
###Code
%load_ext watermark
%watermark -i -v -p viresclient,pandas,xarray,matplotlib,cartopy
from viresclient import SwarmRequest
import datetime as dt
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from tqdm.notebook import tqdm
###Output
_____no_output_____
###Markdown
Product information Platform magnetometer data from other missions have been carefully recalibrated so that they are accurate and suitable for usage in geomagnetic field modelling. The data currently available from VirES are as follows:- CryoSat-2: Olsen, N., Albini, G., Bouffard, J. et al. Magnetic observations from CryoSat-2: calibration and processing of satellite platform magnetometer data. Earth Planets Space 72, 48 (2020). VirES collection names: `"CS_OPER_MAG"`- GRACE (x2): Olsen, N. Magnetometer data from the GRACE satellite duo. Earth Planets Space 73, 62 (2021). VirES collection names: `"GRACE_A_MAG"`, `"GRACE_B_MAG"` - GRACE-FO (x2): Stolle, C., Michaelis, I., Xiong, C. et al. Observing Earth’s magnetic environment with the GRACE-FO mission. Earth Planets Space 73, 51 (2021). VirES collection names: `"GF1_OPER_FGM_ACAL_CORR"`, `"GF2_OPER_FGM_ACAL_CORR"` The variables available from each collection are:
###Code
request = SwarmRequest()
for collection in ("CS_OPER_MAG", "GRACE_A_MAG", "GF1_OPER_FGM_ACAL_CORR"):
print(f"{collection}:\n{request.available_measurements(collection)}\n")
###Output
_____no_output_____
###Markdown
 Where additional `B_NEC` variables are specified (`B_NEC1`, `B_NEC2`, `B_NEC3`), these correspond to measurements from separate magnetometers on-board the spacecraft. See the scientific publications for details. Magnetic model evaluation will also work with those variables; a short sketch using `B_NEC1` follows the availability listing below. The temporal availabilities of the data are:
###Code
for collection in (
"CS_OPER_MAG",
"GRACE_A_MAG", "GRACE_B_MAG",
"GF1_OPER_FGM_ACAL_CORR", "GF2_OPER_FGM_ACAL_CORR"
):
df = request.available_times(collection)
start = df["starttime"].iloc[0]
end = df["endtime"].iloc[-1]
print(f"{collection}:\nAvailability: {start} --- {end}\n")
###Output
_____no_output_____
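As noted above, magnetic model evaluation also works with the additional `B_NEC1`/`B_NEC2`/`B_NEC3` variables. A minimal sketch, assuming `B_NEC1` appears in the `available_measurements` listing above for `CS_OPER_MAG` and that the model values keep the same `B_NEC_IGRF` naming used in the next section:
```python
request = SwarmRequest()
request.set_collection("CS_OPER_MAG")
request.set_products(["B_NEC1"], models=["IGRF"])
ds1 = request.get_between("2018-06-01", "2018-06-02").as_xarray()
# Residual of one of the additional magnetometer units against IGRF, mirroring the B_NEC pattern below
ds1["B_NEC1_res_IGRF"] = ds1["B_NEC1"] - ds1["B_NEC_IGRF"]
```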
###Markdown
Access works just like Swarm MAG productsWe can specify which collection to fetch, and which models to evaluate at the same time:
###Code
request = SwarmRequest()
request.set_collection("GF1_OPER_FGM_ACAL_CORR")
request.set_products(["B_NEC"], models=["IGRF"])
data = request.get_between("2018-06-01", "2018-06-02")
ds = data.as_xarray()
ds
# Append the residual, B - IGRF
ds["B_NEC_res_IGRF"] = ds["B_NEC"] - ds["B_NEC_IGRF"]
# Plot (B) and (B - IGRF) to compare
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 5), sharex=True)
ds["B_NEC"].plot.line(x="Timestamp", ax=axes[0])
ds["B_NEC_res_IGRF"].plot.line(x="Timestamp", ax=axes[1]);
###Output
_____no_output_____
###Markdown
Data from multiple spacecraftHere is an example of fetching and visualising data from multiple spacecraft. We select a day where we can get data from Swarm, CryoSat, and GRACE-FO.
###Code
START = dt.datetime(2018, 6, 1)
END = dt.datetime(2018, 6, 2)
# Mappings to identify spacecraft and collection names
# Let's disable Swarm Charlie & GRACE-FO 2 for now
# as they are in similar places as Swarm Alpha and GRACE-FO 1
spacecraft_to_collections = {
"Swarm Alpha": "SW_OPER_MAGA_LR_1B",
"Swarm Bravo": "SW_OPER_MAGB_LR_1B",
# "Swarm Charlie": "SW_OPER_MAGC_LR_1B",
"CryoSat-2": "CS_OPER_MAG",
"GRACE-FO 1": "GF1_OPER_FGM_ACAL_CORR",
# "GRACE-FO 2": "GF2_OPER_FGM_ACAL_CORR"
}
collections_to_spacecraft = {v: k for k, v in spacecraft_to_collections.items()}
def fetch_sc(sc_collection, start_time=START, end_time=END, **kwargs):
"""Fetch data from a specific spacecraft"""
request = SwarmRequest()
request.set_collection(sc_collection)
request.set_products(["B_NEC"])
data = request.get_between(start_time, end_time, **kwargs)
ds = data.as_xarray()
# Rename the Spacecraft variable to use the mission name too
ds.Spacecraft[:] = collections_to_spacecraft[sc_collection]
return ds
ds_set = {}
for sc in tqdm(spacecraft_to_collections.keys()):
collection = spacecraft_to_collections[sc]
ds_set[sc] = fetch_sc(collection, asynchronous=False, show_progress=False)
###Output
_____no_output_____
###Markdown
Data are now stored within datasets within a dictionary:
###Code
ds_set["Swarm Alpha"]
###Output
_____no_output_____
###Markdown
A quick inspection of the data:
###Code
ds_set["Swarm Alpha"].plot.scatter(x="Longitude", y="Latitude", hue="B_NEC", s=0.1);
fig, axes = plt.subplots(
nrows=2, figsize=(15,15),
subplot_kw={"projection": ccrs.PlateCarree()}
)
for ax in axes:
ax.add_feature(cfeature.COASTLINE, edgecolor='silver')
for sc in ("Swarm Alpha", "Swarm Bravo", "CryoSat-2", "GRACE-FO 1"):
# Extract dataset and plot contents
_ds = ds_set[sc]
lon, lat = _ds["Longitude"], _ds["Latitude"]
B_C = _ds["B_NEC"].sel(NEC="C").values
# Plot positions coloured by spacecraft
axes[0].scatter(x=lon, y=lat, s=0.1, label=sc)
norm = plt.Normalize(vmin=-60000, vmax=60000)
cmap = "viridis"
# Plot
axes[1].scatter(x=lon, y=lat, c=B_C, s=0.1, norm=norm, cmap=cmap)
fig.colorbar(plt.cm.ScalarMappable(norm=norm, cmap=cmap), ax=axes[1], label=r"$B_C$ [nT]");
axes[0].set_title("Orbits from each spacecraft")
axes[1].set_title("Vertical component magnetic measurements");
###Output
_____no_output_____
|
notebooks/features/responsible_ai/DataBalanceAnalysis - Adult Census Income.ipynb
|
###Markdown
 Data Balance Analysis using the Adult Census Income datasetIn this example, we will conduct Data Balance Analysis (which consists of running three groups of measures) on the Adult Census Income dataset to determine how well features and feature values are represented in the dataset.This dataset can be used to predict whether annual income exceeds $50,000/year or not based on demographic data from the 1994 U.S. Census. The dataset we're reading contains 32,561 rows and 14 columns/features.[More info on the dataset here](https://archive.ics.uci.edu/ml/datasets/Adult)---Data Balance Analysis is relevant for overall understanding of datasets, but it becomes essential when thinking about building Machine Learning services out of such datasets. Having a well balanced data representation is critical when developing models in a responsible way, especially in terms of fairness.It is unfortunately all too easy to build an ML model that produces biased results for subsets of an overall population, by training or testing the model on biased ground truth data. There are multiple case studies of biased models assisting in granting loans, healthcare, recruitment opportunities and many other decision making tasks. In most of these examples, the data on which these models were trained was the common issue. These findings emphasize how important it is for model creators and auditors to analyze data balance: to measure training data across sub-populations and ensure the data has good coverage and a balanced representation of labels across sensitive categories and category combinations, and to check that test data is representative of the target population.In summary, Data Balance Analysis, used as a step for building ML models, has the following benefits:* **Reduces risks for unbalanced models (facilitate service fairness) and reduces costs of ML building** by identifying early on data representation gaps that prompt data scientists to seek mitigation steps (collect more data, follow a specific sampling mechanism, create synthetic data, etc.) before proceeding to train their models.* **Enables easy e2e debugging of ML systems** in combination with [Fairlearn](https://fairlearn.org/) by providing a clear view of whether, for an unbalanced model, the issue is tied to the data or the model.---Note: If you are running this notebook in a Spark environment such as Azure Synapse or Databricks, then you can easily visualize the imbalance measures using the built-in plotting features.Python dependencies:```textmatplotlib==3.2.2numpy==1.19.2```
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pyspark.sql.functions as F
import os
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
from notebookutils.visualization import display
df = spark.read.parquet(
"wasbs://[email protected]/AdultCensusIncome.parquet"
)
display(df)
# Convert the "income" column from {<=50K, >50K} to {0, 1} to represent our binary classification label column
label_col = "income"
df = df.withColumn(
label_col, F.when(F.col(label_col).contains("<=50K"), F.lit(0)).otherwise(F.lit(1))
)
###Output
_____no_output_____
###Markdown
Perform preliminary analysis on columns of interest
###Code
display(df.groupBy("race").count())
display(df.groupBy("sex").count())
# Choose columns/features to do data balance analysis on
cols_of_interest = ["race", "sex"]
display(df.select(cols_of_interest + [label_col]))
###Output
_____no_output_____
###Markdown
Calculate Feature Balance MeasuresFeature Balance Measures allow us to see whether each combination of sensitive feature is receiving the positive outcome (true prediction) at equal rates.In this context, we define a feature balance measure, also referred to as the parity, for label y as the absolute difference between the association metrics of two different sensitive classes $[x_A, x_B]$, with respect to the association metric $A(x_i, y)$. That is:$$parity(y \vert x_A, x_B, A(\cdot)) \coloneqq A(x_A, y) - A(x_B, y)$$Using the dataset, we can see if the various sexes and races are receiving >50k income at equal or unequal rates.Note: Many of these metrics were influenced by this paper [Measuring Model Biases in the Absence of Ground Truth](https://arxiv.org/abs/2103.03417).| Association Metric | Family | Description | Interpretation/Formula | Reference ||----------------------------------------------------|-----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|| Demographic Parity | Fairness | Proportion of each segment of a protected class (e.g. gender) should receive the positive outcome at equal rates. | As close to 0 means better parity. $DP = P(Y \vert A = "Male") - P(Y \vert A = "Female")$. | [Link](https://en.wikipedia.org/wiki/Fairness_%28machine_learning%29) || Pointwise Mutual Information (PMI), normalized PMI | Entropy | The PMI of a pair of feature values (ex: Gender=Male and Gender=Female) quantifies the discrepancy between the probability of their coincidence given their joint distribution and their individual distributions (assuming independence). | Range (normalized) $[-1, 1]$. -1 for no co-occurences. 0 for co-occurences at random. 1 for complete co-occurences. | [Link](https://en.wikipedia.org/wiki/Pointwise_mutual_information) || Sorensen-Dice Coefficient (SDC) | Intersection-over-Union | Used to gauge the similarity of two samples. Related to F1 score. | Equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. | [Link](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient) || Jaccard Index | Intersection-over-Union | Similar to SDC, guages the similarity and diversity of sample sets. | Equals the size of the intersection divided by the size of the union of the sample sets. | [Link](https://en.wikipedia.org/wiki/Jaccard_index) || Kendall Rank Correlation | Correlation and Statistical Tests | Used to measure the ordinal association between two measured quantities. | High when observations have a similar rank and low when observations have a dissimilar rank between the two variables. | [Link](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient) || Log-Likelihood Ratio | Correlation and Statistical Tests | Calculates the degree to which data supports one variable versus another. Log of the likelihood ratio, which gives the probability of correctly predicting the label in ratio to probability of incorrectly predicting label. | If likelihoods are similar, it should be close to 0. 
| [Link](https://en.wikipedia.org/wiki/Likelihood_functionLikelihood_ratio) || t-test | Correlation and Statistical Tests | Used to compare the means of two groups (pairwise). | Value looked up in t-Distribution tell if statistically significant or not. | [Link](https://en.wikipedia.org/wiki/Student's_t-test) |
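To make the parity definition concrete, here is a hypothetical hand computation of Demographic Parity for the `sex` column (assuming it holds the literal values "Male" and "Female"); the `FeatureBalanceMeasure` transformer used in the next cell reports this measure, together with the other association metrics above, for every pair of classes:
```python
rates = df.groupBy("sex").agg(F.avg(label_col).alias("positive_rate")).collect()
positive_rate = {row["sex"]: row["positive_rate"] for row in rates}
# DP = P(income > 50K | sex = "Male") - P(income > 50K | sex = "Female")
dp_male_vs_female = positive_rate["Male"] - positive_rate["Female"]
print(f"Demographic parity, Male vs Female: {dp_male_vs_female:.4f}")
```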
###Code
from synapse.ml.exploratory import FeatureBalanceMeasure
feature_balance_measures = (
FeatureBalanceMeasure()
.setSensitiveCols(cols_of_interest)
.setLabelCol(label_col)
.setVerbose(True)
.transform(df)
)
# Sort by Demographic Parity descending for all features
display(feature_balance_measures.sort(F.abs("FeatureBalanceMeasure.dp").desc()))
# Drill down to feature == "sex"
display(
feature_balance_measures.filter(F.col("FeatureName") == "sex").sort(
F.abs("FeatureBalanceMeasure.dp").desc()
)
)
# Drill down to feature == "race"
display(
feature_balance_measures.filter(F.col("FeatureName") == "race").sort(
F.abs("FeatureBalanceMeasure.dp").desc()
)
)
###Output
_____no_output_____
###Markdown
Visualize Feature Balance Measures
###Code
races = [row["race"] for row in df.groupBy("race").count().select("race").collect()]
dp_rows = (
feature_balance_measures.filter(F.col("FeatureName") == "race")
.select("ClassA", "ClassB", "FeatureBalanceMeasure.dp")
.collect()
)
race_dp_values = [(row["ClassA"], row["ClassB"], row["dp"]) for row in dp_rows]
race_dp_array = np.zeros((len(races), len(races)))
for class_a, class_b, dp_value in race_dp_values:
i, j = races.index(class_a), races.index(class_b)
dp_value = round(dp_value, 2)
race_dp_array[i, j] = dp_value
race_dp_array[j, i] = -1 * dp_value
colormap = "RdBu"
dp_min, dp_max = -1.0, 1.0
fig, ax = plt.subplots()
im = ax.imshow(race_dp_array, vmin=dp_min, vmax=dp_max, cmap=colormap)
cbar = ax.figure.colorbar(im, ax=ax)
cbar.ax.set_ylabel("Demographic Parity", rotation=-90, va="bottom")
ax.set_xticks(np.arange(len(races)))
ax.set_yticks(np.arange(len(races)))
ax.set_xticklabels(races)
ax.set_yticklabels(races)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
for i in range(len(races)):
for j in range(len(races)):
text = ax.text(j, i, race_dp_array[i, j], ha="center", va="center", color="k")
ax.set_title("Demographic Parity of Races in Adult Dataset")
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
 Interpret Feature Balance MeasuresDemographic Parity:* When it is positive, it means that ClassA sees the positive outcome more than ClassB.* When it is negative, it means that ClassB sees the positive outcome more than ClassA.---From the results, we can tell the following:For Sex:* DP(Male, Female) = 0.1963 shows "Male" observations are associated with ">50k" income label more often than "Female" observations.For Race:* DP(Other, Asian-Pac-Islander) = -0.1734 shows "Other" observations are associated with ">50k" income label less than "Asian-Pac-Islander" observations.* DP(White, Other) = 0.1636 shows "White" observations are associated with ">50k" income label more often than "Other" observations.* DP(Asian-Pac-Islander, Amer-Indian-Eskimo) = 0.1494 shows "Asian-Pac-Islander" observations are associated with ">50k" income label more often than "Amer-Indian-Eskimo" observations.Again, you can take mitigation steps to upsample/downsample your data to be less biased towards certain features and feature values.Built-in mitigation steps are coming soon. Calculate Distribution Balance MeasuresDistribution Balance Measures allow us to compare our data with a reference distribution (i.e. uniform distribution). They are calculated per sensitive column and don't use the label column.For example, let's assume we have a dataset with 9 rows and a Gender column, and we observe that:* "Male" appears 4 times* "Female" appears 3 times* "Other" appears 2 timesAssuming the uniform distribution:$$ReferenceCount \coloneqq \frac{numRows}{numFeatureValues}$$$$ReferenceProbability \coloneqq \frac{1}{numFeatureValues}$$| Feature Value | Observed Count | Reference Count | Observed Probability | Reference Probability ||---------------|----------------|-----------------|----------------------|-----------------------|| Male | 4 | 9/3 = 3 | 4/9 = 0.44 | 3/9 = 0.33 || Female | 3 | 9/3 = 3 | 3/9 = 0.33 | 3/9 = 0.33 || Other | 2 | 9/3 = 3 | 2/9 = 0.22 | 3/9 = 0.33 |We can use distance measures to find out how far our observed and reference distributions of these feature values are. Some of these distance measures include:| Measure | Description | Interpretation | Reference ||--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|| KL Divergence | Measure of how one probability distribution is different from a second, reference probability distribution. Measure of the information gained when one revises one's beliefs from the prior probability distribution Q to the posterior probability distribution P. In other words, it is the amount of information lost when Q is used to approximate P. | Non-negative. 0 means P = Q. | [Link](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) || JS Distance | Measuring the similarity between two probability distributions. Symmetrized and smoothed version of the Kullback–Leibler (KL) divergence. Square root of JS Divergence. | Range [0, 1]. 0 means perfectly same to balanced distribution. 
| [Link](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence) || Wasserstein Distance | This distance is also known as the earth mover’s distance, since it can be seen as the minimum amount of “work” required to transform u into v, where “work” is measured as the amount of distribution weight that must be moved, multiplied by the distance it has to be moved. | Non-negative. 0 means P = Q. | [Link](https://en.wikipedia.org/wiki/Wasserstein_metric) || Infinity Norm Distance | Distance between two vectors is the greatest of their differences along any coordinate dimension. Also called Chebyshev distance or chessboard distance. | Non-negative. 0 means same distribution. | [Link](https://en.wikipedia.org/wiki/Chebyshev_distance) || Total Variation Distance | It is equal to half the L1 (Manhattan) distance between the two distributions. Take the difference between the two proportions in each category, add up the absolute values of all the differences, and then divide the sum by 2. | Non-negative. 0 means same distribution. | [Link](https://en.wikipedia.org/wiki/Total_variation_distance_of_probability_measures) || Chi-Squared Test | The chi-square test tests the null hypothesis that the categorical data has the given frequencies given expected frequencies in each category. | p-value gives evidence against null-hypothesis that difference in observed and expected frequencies is by random chance. | [Link](https://en.wikipedia.org/wiki/Chi-squared_test) |
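As a concrete illustration, here is a hypothetical hand computation of two of these distances for the `race` column against the uniform reference distribution; the `DistributionBalanceMeasure` transformer used below reports the full set:
```python
counts = df.groupBy("race").count().collect()
obs = np.array([row["count"] for row in counts], dtype=float)
p = obs / obs.sum()                 # observed probabilities
q = np.full_like(p, 1.0 / len(p))   # uniform reference probabilities
kl_divergence = float(np.sum(p * np.log(p / q)))
total_variation_dist = float(0.5 * np.abs(p - q).sum())
print(f"KL divergence: {kl_divergence:.4f}  Total variation distance: {total_variation_dist:.4f}")
```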
###Code
from synapse.ml.exploratory import DistributionBalanceMeasure
distribution_balance_measures = (
DistributionBalanceMeasure().setSensitiveCols(cols_of_interest).transform(df)
)
# Sort by JS Distance descending
display(
distribution_balance_measures.sort(
F.abs("DistributionBalanceMeasure.js_dist").desc()
)
)
###Output
_____no_output_____
###Markdown
Visualize Distribution Balance Measures
###Code
distribution_rows = distribution_balance_measures.collect()
race_row = [row for row in distribution_rows if row["FeatureName"] == "race"][0][
"DistributionBalanceMeasure"
]
sex_row = [row for row in distribution_rows if row["FeatureName"] == "sex"][0][
"DistributionBalanceMeasure"
]
measures_of_interest = [
"kl_divergence",
"js_dist",
"inf_norm_dist",
"total_variation_dist",
"wasserstein_dist",
]
race_measures = [round(race_row[measure], 4) for measure in measures_of_interest]
sex_measures = [round(sex_row[measure], 4) for measure in measures_of_interest]
x = np.arange(len(measures_of_interest))
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(x - width / 2, race_measures, width, label="Race")
rects2 = ax.bar(x + width / 2, sex_measures, width, label="Sex")
ax.set_xlabel("Measure")
ax.set_ylabel("Value")
ax.set_title("Distribution Balance Measures of Sex and Race in Adult Dataset")
ax.set_xticks(x)
ax.set_xticklabels(measures_of_interest)
ax.legend()
plt.setp(ax.get_xticklabels(), rotation=20, ha="right", rotation_mode="default")
def autolabel(rects):
for rect in rects:
height = rect.get_height()
ax.annotate(
"{}".format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 1), # 1 point vertical offset
textcoords="offset points",
ha="center",
va="bottom",
)
autolabel(rects1)
autolabel(rects2)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
 Interpret Distribution Balance MeasuresRace has a JS Distance of 0.5104 while Sex has a JS Distance of 0.1217.Knowing that JS Distance is between [0, 1] where 0 means perfectly balanced distribution, we can tell that:* There is a larger disparity between various races than various sexes in our dataset.* Race is nowhere close to a perfectly balanced distribution (i.e. some races are seen ALOT more than others in our dataset).* Sex is fairly close to a perfectly balanced distribution. Calculate Aggregate Balance MeasuresAggregate Balance Measures allow us to obtain a higher notion of inequality. They are calculated on the global set of sensitive columns and don't use the label column.These measures look at distribution of records across all combinations of sensitive columns. For example, if Sex and Race are sensitive columns, it shall try to quantify imbalance across all combinations - (Male, Black), (Female, White), (Male, Asian-Pac-Islander), etc.| Measure | Description | Interpretation | Reference ||----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------|| Atkinson Index | It presents the percentage of total income that a given society would have to forego in order to have more equal shares of income between its citizens. This measure depends on the degree of society aversion to inequality (a theoretical parameter decided by the researcher), where a higher value entails greater social utility or willingness by individuals to accept smaller incomes in exchange for a more equal distribution. An important feature of the Atkinson index is that it can be decomposed into within-group and between-group inequality. | Range $[0, 1]$. 0 if perfect equality. 1 means maximum inequality. In our case, it is the proportion of records for a sensitive columns’ combination. | [Link](https://en.wikipedia.org/wiki/Atkinson_index) || Theil T Index | GE(1) = Theil's T and is more sensitive to differences at the top of the distribution. The Theil index is a statistic used to measure economic inequality. The Theil index measures an entropic "distance" the population is away from the "ideal" egalitarian state of everyone having the same income. | If everyone has the same income, then T_T equals 0. If one person has all the income, then T_T gives the result $ln(N)$. 0 means equal income and larger values mean higher level of disproportion. | [Link](https://en.wikipedia.org/wiki/Theil_index) || Theil L Index | GE(0) = Theil's L and is more sensitive to differences at the lower end of the distribution. Logarithm of (mean income)/(income i), over all the incomes included in the summation. It is also referred to as the mean log deviation measure. 
Because a transfer from a larger income to a smaller one will change the smaller income's ratio more than it changes the larger income's ratio, the transfer-principle is satisfied by this index. | Same interpretation as Theil T Index. | [Link](https://en.wikipedia.org/wiki/Theil_index) |
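For intuition, here is a hypothetical hand computation of the Theil T index over the record counts of each (race, sex) combination, following the definition above; the `AggregateBalanceMeasure` transformer used below also reports the Atkinson and Theil L indices:
```python
combo_counts = df.groupBy(*cols_of_interest).count().collect()
x = np.array([row["count"] for row in combo_counts], dtype=float)
mu = x.mean()
# Theil T = (1/N) * sum_i (x_i / mu) * ln(x_i / mu); it is 0 when every combination is equally represented
theil_t = float(np.mean((x / mu) * np.log(x / mu)))
print(f"Theil T index across {len(x)} (race, sex) combinations: {theil_t:.4f}")
```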
###Code
from synapse.ml.exploratory import AggregateBalanceMeasure
aggregate_balance_measures = (
AggregateBalanceMeasure().setSensitiveCols(cols_of_interest).transform(df)
)
display(aggregate_balance_measures)
###Output
_____no_output_____
###Markdown
Data Balance Analysis using the Adult Census Income datasetIn this example, we will conduct Data Balance Analysis (which consists on running three groups of measures) on the Adult Census Income dataset to determine how well features and feature values are represented in the dataset.This dataset can be used to predict whether annual income exceeds $50,000/year or not based on demographic data from the 1994 U.S. Census. The dataset we're reading contains 32,561 rows and 14 columns/features.[More info on the dataset here](https://archive.ics.uci.edu/ml/datasets/Adult)---Data Balance Analysis is relevant for overall understanding of datasets, but it becomes essential when thinking about building Machine Learning services out of such datasets. Having a well balanced data representation is critical when developing models in a responsible way, specially in terms of fairness. It is unfortunately all too easy to build an ML model that produces biased results for subsets of an overall population, by training or testing the model on biased ground truth data. There are multiple case studies of biased models assisting in granting loans, healthcare, recruitment opportunities and many other decision making tasks. In most of these examples, the data from which these models are trained was the common issue. These findings emphasize how important it is for model creators and auditors to analyze data balance: to measure training data across sub-populations and ensure the data has good coverage and a balanced representation of labels across sensitive categories and category combinations, and to check that test data is representative of the target population.In summary, Data Balance Analysis, used as a step for building ML models has the following benefits:* **Reduces risks for unbalanced models (facilitate service fairness) and reduces costs of ML building** by identifying early on data representation gaps that prompt data scientists to seek mitigation steps (collect more data, follow a specific sampling mechanism, create synthetic data, etc.) before proceeding to train their models. * **Enables easy e2e debugging of ML systems** in combination with [Fairlearn](https://fairlearn.org/) by providing a clear view if for an unbalanced model the issue is tied to the data or the model. ---Note: If you are running this notebook in a Spark environment such as Azure Synapse or Databricks, then you can easily visualize the imbalance measures using the built-in plotting features.Python dependencies:* matplotlib==3.2.2* numpy==1.19.2
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pyspark.sql.functions as F
df = spark.read.parquet("wasbs://[email protected]/AdultCensusIncome.parquet")
display(df)
# Convert the "income" column from {<=50K, >50K} to {0, 1} to represent our binary classification label column
label_col = "income"
df = df.withColumn(label_col, F.when(F.col(label_col).contains("<=50K"), F.lit(0)).otherwise(F.lit(1)))
###Output
_____no_output_____
###Markdown
Perform preliminary analysis on columns of interest
###Code
display(df.groupBy("race").count())
display(df.groupBy("sex").count())
# Choose columns/features to do data balance analysis on
cols_of_interest = ["race", "sex"]
display(df.select(cols_of_interest + [label_col]))
###Output
_____no_output_____
###Markdown
Calculate Feature Balance MeasuresFeature Balance Measures allow us to see whether each combination of sensitive feature is receiving the positive outcome (true prediction) at equal rates.In this context, we define a feature balance measure, also referred to as the parity, for label y as the absolute difference between the association metrics of two different sensitive classes \\([x_A, x_B]\\), with respect to the association metric \\(A(x_i, y)\\). That is:$$parity(y \vert x_A, x_B, A(\cdot)) \coloneqq A(x_A, y) - A(x_B, y) $$Using the dataset, we can see if the various sexes and races are receiving >50k income at equal or unequal rates.Note: Many of these metrics were influenced by this paper [Measuring Model Biases in the Absence of Ground Truth](https://arxiv.org/abs/2103.03417).Measure | Family | Description | Interpretation/Formula | Reference- | - | - | - | -Demographic Parity | Fairness | Proportion of each segment of a protected class (e.g. gender) should receive the positive outcome at equal rates. | As close to 0 means better parity. \\(DP = P(Y \vert A = "Male") - P(Y \vert A = "Female")\\). Y = Positive label rate. | [Link](https://en.wikipedia.org/wiki/Fairness_%28machine_learning%29)Pointwise Mutual Information (PMI), normalized PMI | Entropy | The PMI of a pair of feature values (ex: Gender=Male and Gender=Female) quantifies the discrepancy between the probability of their coincidence given their joint distribution and their individual distributions (assuming independence). | Range (normalized) [-1, 1]. -1 for no co-occurences. 0 for co-occurences at random. 1 for complete co-occurences. | [Link](https://en.wikipedia.org/wiki/Pointwise_mutual_information)Sorensen-Dice Coefficient (SDC) | Intersection-over-Union | Used to gauge the similarity of two samples. Related to F1 score. | Equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. | [Link](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient)Jaccard Index | Intersection-over-Union | Similar to SDC, guages the similarity and diversity of sample sets. | Equals the size of the intersection divided by the size of the union of the sample sets. | [Link](https://en.wikipedia.org/wiki/Jaccard_index)Kendall Rank Correlation | Correlation and Statistical Tests | Used to measure the ordinal association between two measured quantities. | High when observations have a similar rank and low when observations have a dissimilar rank between the two variables. | [Link](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient)Log-Likelihood Ratio | Correlation and Statistical Tests | Calculates the degree to which data supports one variable versus another. Log of the likelihood ratio, which gives the probability of correctly predicting the label in ratio to probability of incorrectly predicting label. | If likelihoods are similar, it should be close to 0. | [Link](https://en.wikipedia.org/wiki/Likelihood_functionLikelihood_ratio)t-test | Correlation and Statistical Tests | Used to compare the means of two groups (pairwise). | Value looked up in t-Distribution tell if statistically significant or not. | [Link](https://en.wikipedia.org/wiki/Student's_t-test)
###Code
from synapse.ml.exploratory import FeatureBalanceMeasure
feature_balance_measures = (
FeatureBalanceMeasure()
.setSensitiveCols(cols_of_interest)
.setLabelCol(label_col)
.setVerbose(True)
.transform(df)
)
# Sort by Demographic Parity descending for all features
display(feature_balance_measures.sort(F.abs("FeatureBalanceMeasure.dp").desc()))
# Drill down to feature == "sex"
display(feature_balance_measures.filter(F.col("FeatureName") == "sex").sort(F.abs("FeatureBalanceMeasure.dp").desc()))
# Drill down to feature == "race"
display(feature_balance_measures.filter(F.col("FeatureName") == "race").sort(F.abs("FeatureBalanceMeasure.dp").desc()))
###Output
_____no_output_____
###Markdown
Visualize Feature Balance Measures
###Code
races = [row["race"] for row in df.groupBy("race").count().select("race").collect()]
dp_rows = feature_balance_measures.filter(F.col("FeatureName") == "race").select("ClassA", "ClassB", "FeatureBalanceMeasure.dp").collect()
race_dp_values = [(row["ClassA"], row["ClassB"], row["dp"]) for row in dp_rows]
race_dp_array = np.zeros((len(races), len(races)))
for class_a, class_b, dp_value in race_dp_values:
i, j = races.index(class_a), races.index(class_b)
dp_value = round(dp_value, 2)
race_dp_array[i, j] = dp_value
race_dp_array[j, i] = -1 * dp_value
colormap = "RdBu"
dp_min, dp_max = -1.0, 1.0
fig, ax = plt.subplots()
im = ax.imshow(race_dp_array, vmin=dp_min, vmax=dp_max, cmap=colormap)
cbar = ax.figure.colorbar(im, ax=ax)
cbar.ax.set_ylabel("Demographic Parity", rotation=-90, va="bottom")
ax.set_xticks(np.arange(len(races)))
ax.set_yticks(np.arange(len(races)))
ax.set_xticklabels(races)
ax.set_yticklabels(races)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
for i in range(len(races)):
for j in range(len(races)):
text = ax.text(j, i, race_dp_array[i, j], ha="center", va="center", color="k")
ax.set_title("Demographic Parity of Races in Adult Dataset")
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown


Interpret Feature Balance Measures

Demographic Parity:
* When it is positive, it means that ClassA sees the positive outcome more than ClassB.
* When it is negative, it means that ClassB sees the positive outcome more than ClassA.

---

From the results, we can tell the following:

For Sex:
* DP(Male, Female) = 0.1963 shows "Male" observations are associated with ">50k" income label more often than "Female" observations.

For Race:
* DP(Other, Asian-Pac-Islander) = -0.1734 shows "Other" observations are associated with ">50k" income label less than "Asian-Pac-Islander" observations.
* DP(White, Other) = 0.1636 shows "White" observations are associated with ">50k" income label more often than "Other" observations.
* DP(Asian-Pac-Islander, Amer-Indian-Eskimo) = 0.1494 shows "Asian-Pac-Islander" observations are associated with ">50k" income label more often than "Amer-Indian-Eskimo" observations.

Again, you can take mitigation steps to upsample/downsample your data to be less biased towards certain features and feature values. Built-in mitigation steps are coming soon.

Calculate Distribution Balance Measures

Distribution Balance Measures allow us to compare our data with a reference distribution (i.e. uniform distribution). They are calculated per sensitive column and don't use the label column.

For example, let's assume we have a dataset with 9 rows and a Gender column, and we observe that:
* "Male" appears 4 times
* "Female" appears 3 times
* "Other" appears 2 times

Assuming the uniform distribution:
$$ReferenceCount \coloneqq \frac{numRows}{numFeatureValues}$$
$$ReferenceProbability \coloneqq \frac{1}{numFeatureValues}$$

Feature Value | Observed Count | Reference Count | Observed Probability | Reference Probability
- | - | - | - | -
Male | 4 | 9/3 = 3 | 4/9 = 0.44 | 3/9 = 0.33
Female | 3 | 9/3 = 3 | 3/9 = 0.33 | 3/9 = 0.33
Other | 2 | 9/3 = 3 | 2/9 = 0.22 | 3/9 = 0.33

We can use distance measures to find out how far apart our observed and reference distributions of these feature values are. Some of these distance measures include:

Measure | Description | Interpretation | Reference
- | - | - | -
KL Divergence | Measure of how one probability distribution is different from a second, reference probability distribution. Measure of the information gained when one revises one's beliefs from the prior probability distribution Q to the posterior probability distribution P. In other words, it is the amount of information lost when Q is used to approximate P. | Non-negative. 0 means P = Q. | [Link](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence)
JS Distance | Measures the similarity between two probability distributions. Symmetrized and smoothed version of the Kullback–Leibler (KL) divergence. Square root of JS Divergence. | Range [0, 1]. 0 means perfectly same to balanced distribution. | [Link](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence)
Wasserstein Distance | This distance is also known as the earth mover's distance, since it can be seen as the minimum amount of "work" required to transform u into v, where "work" is measured as the amount of distribution weight that must be moved, multiplied by the distance it has to be moved. | Non-negative. 0 means P = Q. | [Link](https://en.wikipedia.org/wiki/Wasserstein_metric)
Infinity Norm Distance | Distance between two vectors is the greatest of their differences along any coordinate dimension. Also called Chebyshev distance or chessboard distance. | Non-negative. 0 means same distribution. | [Link](https://en.wikipedia.org/wiki/Chebyshev_distance)
Total Variation Distance | It is equal to half the L1 (Manhattan) distance between the two distributions. Take the difference between the two proportions in each category, add up the absolute values of all the differences, and then divide the sum by 2. | Non-negative. 0 means same distribution. | [Link](https://en.wikipedia.org/wiki/Total_variation_distance_of_probability_measures)
Chi-Squared Test | The chi-square test tests the null hypothesis that the categorical data has the given frequencies given expected frequencies in each category. | p-value gives evidence against null-hypothesis that difference in observed and expected frequencies is by random chance. | [Link](https://en.wikipedia.org/wiki/Chi-squared_test)
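To make these distances concrete, here is a small sketch (plain NumPy, for illustration only; SynapseML's implementation may normalize differently) that recomputes a few of them for the toy 9-row Gender example above.
###Code
# Toy check of the worked example: observed counts [4, 3, 2] vs. a uniform reference.
import numpy as np
observed = np.array([4, 3, 2]) / 9.0
reference = np.full(3, 1.0 / 3.0)
kl_divergence = float(np.sum(observed * np.log(observed / reference)))
total_variation = 0.5 * float(np.sum(np.abs(observed - reference)))
infinity_norm = float(np.max(np.abs(observed - reference)))
print("KL divergence:", kl_divergence)
print("Total variation distance:", total_variation)
print("Infinity norm (Chebyshev) distance:", infinity_norm)
###Output
_____no_output_____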
###Code
from synapse.ml.exploratory import DistributionBalanceMeasure
distribution_balance_measures = (
DistributionBalanceMeasure()
.setSensitiveCols(cols_of_interest)
.transform(df)
)
# Sort by JS Distance descending
display(distribution_balance_measures.sort(F.abs("DistributionBalanceMeasure.js_dist").desc()))
###Output
_____no_output_____
###Markdown
Visualize Distribution Balance Measures
###Code
distribution_rows = distribution_balance_measures.collect()
race_row = [row for row in distribution_rows if row["FeatureName"] == "race"][0]["DistributionBalanceMeasure"]
sex_row = [row for row in distribution_rows if row["FeatureName"] == "sex"][0]["DistributionBalanceMeasure"]
measures_of_interest = ["kl_divergence", "js_dist", "inf_norm_dist", "total_variation_dist", "wasserstein_dist"]
race_measures = [round(race_row[measure], 4) for measure in measures_of_interest]
sex_measures = [round(sex_row[measure], 4) for measure in measures_of_interest]
x = np.arange(len(measures_of_interest))
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, race_measures, width, label="Race")
rects2 = ax.bar(x + width/2, sex_measures, width, label="Sex")
ax.set_xlabel("Measure")
ax.set_ylabel("Value")
ax.set_title("Distribution Balance Measures of Sex and Race in Adult Dataset")
ax.set_xticks(x)
ax.set_xticklabels(measures_of_interest)
ax.legend()
plt.setp(ax.get_xticklabels(), rotation=20, ha="right", rotation_mode="default")
def autolabel(rects):
for rect in rects:
height = rect.get_height()
ax.annotate('{}'.format(height),
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 1), # 1 point vertical offset
textcoords="offset points",
ha='center', va='bottom')
autolabel(rects1)
autolabel(rects2)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown


Interpret Distribution Balance Measures

Race has a JS Distance of 0.5104 while Sex has a JS Distance of 0.1217. Knowing that JS Distance is between [0, 1] where 0 means perfectly balanced distribution, we can tell that:
* There is a larger disparity between various races than various sexes in our dataset.
* Race is nowhere close to a perfectly balanced distribution (i.e. some races are seen a lot more than others in our dataset).
* Sex is fairly close to a perfectly balanced distribution.

Calculate Aggregate Balance Measures

Aggregate Balance Measures allow us to obtain a higher notion of inequality. They are calculated on the global set of sensitive columns and don't use the label column.

These measures look at the distribution of records across all combinations of sensitive columns. For example, if Sex and Race are sensitive columns, they try to quantify imbalance across all combinations - (Male, Black), (Female, White), (Male, Asian-Pac-Islander), etc.

Measure | Description | Interpretation | Reference
- | - | - | -
Atkinson Index | It presents the percentage of total income that a given society would have to forego in order to have more equal shares of income between its citizens. This measure depends on the degree of society aversion to inequality (a theoretical parameter decided by the researcher), where a higher value entails greater social utility or willingness by individuals to accept smaller incomes in exchange for a more equal distribution. An important feature of the Atkinson index is that it can be decomposed into within-group and between-group inequality. | Range [0, 1]. 0 if perfect equality. 1 means maximum inequality. In our case, it is the proportion of records for a sensitive columns' combination. | [Link](https://en.wikipedia.org/wiki/Atkinson_index)
Theil T Index | GE(1) = Theil's T and is more sensitive to differences at the top of the distribution. The Theil index is a statistic used to measure economic inequality. The Theil index measures an entropic "distance" the population is away from the "ideal" egalitarian state of everyone having the same income. | If everyone has the same income, then T_T equals 0. If one person has all the income, then T_T gives the result (ln N). 0 means equal income and larger values mean higher level of disproportion. | [Link](https://en.wikipedia.org/wiki/Theil_index)
Theil L Index | GE(0) = Theil's L and is more sensitive to differences at the lower end of the distribution. Logarithm of (mean income)/(income i), over all the incomes included in the summation. It is also referred to as the mean log deviation measure. Because a transfer from a larger income to a smaller one will change the smaller income's ratio more than it changes the larger income's ratio, the transfer-principle is satisfied by this index. | Same interpretation as Theil T Index. | [Link](https://en.wikipedia.org/wiki/Theil_index)
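As a rough illustration of the Theil T index (a sketch using the textbook formula; SynapseML's exact parameterization may differ), applied to made-up record counts per sensitive-feature combination:
###Code
# Theil T over hypothetical record counts of (sex, race) combinations:
# T = (1/N) * sum_i (x_i / mu) * ln(x_i / mu), with mu the mean count.
import numpy as np
combination_counts = np.array([120.0, 80.0, 40.0, 10.0])  # made-up counts, for illustration only
mu = combination_counts.mean()
theil_t = float(np.mean((combination_counts / mu) * np.log(combination_counts / mu)))
print("Theil T index:", theil_t)  # 0 would mean every combination is equally represented
###Output
_____no_output_____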
###Code
from synapse.ml.exploratory import AggregateBalanceMeasure
aggregate_balance_measures = (
AggregateBalanceMeasure()
.setSensitiveCols(cols_of_interest)
.transform(df)
)
display(aggregate_balance_measures)
###Output
_____no_output_____
python_expert_notebook.ipynb
###Markdown
What Does It Take to Be An Expert At Python

Notebook based on James Powell's talk at PyData 2017: https://www.youtube.com/watch?v=7lmCu8wz8ro. If you want to become an expert in Python, you should definitely watch this PyData talk from James Powell.

Video Index
* metaclasses: 18:50
* metaclasses (explained): 40:40
* decorator: 45:20
* generator: 1:04:30
* context manager: 1:22:37
* summary: 1:40:00

**Definitions** Python is a language oriented around protocols: some behavior, syntax, bytecode, or top-level function, and there is a way to tell Python how to implement it on an arbitrary object via underscore (dunder) methods. The exact correspondence is usually guessable, but if you can't guess it, you can always google the Python data model.

**Metaclass** Mechanism: some hook into the class construction process. Question: do you have these methods implemented? Meaning: library code vs. user code - how do you enforce a constraint?

**Decorator** Hooks into the idea that everything creates a structure at run time. Wraps sets of functions with a before and after behavior.

**Generators** Take a single computation that would otherwise run eagerly from the injection of its parameters to the final result, and interleave it with other code by adding yield points where you can hand back intermediate values (one small piece of the computation) and also yield control back to the caller. Think of a generator as a way to take one long piece of computation and break it up into small parts.

**Context managers** Two structures that allow you to tie two actions together: a setup action and a teardown action, making sure they always happen in concordance with each other.
###Code
# some behavior that I want to implement -> write some __ function __
# top-level function or top-level syntax -> corresponding __
# x + y -> __add__
# init x -> __init__
# repr(x) --> __repr__
# x() -> __call__
class Polynomial:
def __init__(self, *coeffs):
self.coeffs = coeffs
def __repr__(self):
return 'Polynomial(*{!r})'.format(self.coeffs)
def __add__(self, other):
return Polynomial(*(x + y for x, y in zip(self.coeffs, other.coeffs)))
def __len__(self):
return len(self.coeffs)
def __call__(self):
pass
###Output
_____no_output_____
###Markdown
3 Core Patterns to understand object orientation* Protocol view of python* Built-in inheritance protocol (where to go)* Caveats around how object orientation in python works
###Code
p1 = Polynomial(1, 2, 3)
p2 = Polynomial(3, 4, 3)
p1 + p2
len(p1)
###Output
_____no_output_____
###Markdown
Metaclasses
###Code
# File 1 - library.py
class Base:
def food(self):
return 'foo'
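# ('food' above is presumably the talk's deliberate typo: the user-side assert below fires because Base has no 'foo')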
# File2 - user.py
assert hasattr(Base, 'foo'), "you broke it, you fool!"
class Derived(Base):
def bar(self):
return self.foo
# File 1 - library.py
class Base:
def foo(self):
return self.bar()
# File2 - user.py
assert hasattr(Base, 'foo'), "you broke it, you fool!"
class Derived(Base):
def bar(self):
return 'bar'
Derived.bar
def _():
class Base:
pass
from dis import dis
dis(_) # LOAD_BUILD_CLASS
# Catch Building of Classes
class Base:
def foo(self):
return self.bar()
old_bc = __build_class__
def my_bc(*a, **kw):
print('my buildclass ->', a, kw)
return old_bc(*a, **kw)
import builtins
builtins.__build_class__ = my_bc
# Catch Building of Classes
class Base:
def foo(self):
return self.bar()
old_bc = __build_class__
def my_bc(fun, name, base=None, **kw):
if base is Base:
print('Check if bar method defined')
if base is not None:
return old_bc(fun, name, base, **kw)
return old_bc(fun, name, **kw)
import builtins
builtins.__build_class__ = my_bc
import builtins
import importlib
importlib.reload(builtins)
class BaseMeta(type):
def __new__(cls, name, bases, body):
print('BaseMeta.__new__', cls, name, bases, body)
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
class BaseMeta(type):
def __new__(cls, name, bases, body):
if not 'bar' in body:
raise TypeError('bad user class')
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
class BaseMeta(type):
def __new__(cls, name, bases, body):
if name != 'Base' and not 'bar' in body:
raise TypeError('bad user class')
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
def __init_subclass__(*a, **kw):
print('init_subclass', a, kw)
return super().__init_subclass__(*a, **kw)
help(Base.__init_subclass__)
###Output
Help on method __init_subclass__ in module __main__:
__init_subclass__(*a, **kw) method of __main__.BaseMeta instance
This method is called when a class is subclassed.
The default implementation does nothing. It may be
overridden to extend subclasses.
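###Markdown
The same "user class must define bar" constraint can be enforced without a metaclass. Below is a minimal sketch (an added aside) using the Python 3.6+ `__init_subclass__` hook mentioned above.
###Code
# Hypothetical alternative to BaseMeta: enforce the constraint from the base class itself.
class Base:
    def foo(self):
        return self.bar()
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if not hasattr(cls, 'bar'):
            raise TypeError('bad user class: {} must define bar'.format(cls.__name__))
class Derived(Base):
    def bar(self):
        return 'bar'
Derived().foo()
###Output
_____no_output_____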
###Markdown
Decorators
###Code
# dec.py
def add(x, y=10):
return x + y
add(10, 20)
add
# Name of function
add.__name__
# What module function is assigned to
add.__module__
# Default values
add.__defaults__
# Byte code for function
add.__code__.co_code
# Variable names function interacts with
add.__code__.co_varnames
###Output
_____no_output_____
###Markdown
What's your source code?
###Code
from inspect import getsource
getsource(add)
print(getsource(add))
# What file are you in?
from inspect import getfile
getfile(add)
from inspect import getmodule
getmodule(add)
print('add(10)', add(10))
print('add(20, 30)', add(20, 30))
print('add("a", "b")', add("a", "b"))
#Count how long it took to run
from time import time
def add_timer(x, y=10):
before = time()
rv = x + y
after = time()
print('elapsed:', after - before)
return rv
print('add(10)', add_timer(10))
print('add(20, 30)', add_timer(20, 30))
print('add("a", "b")', add_timer("a", "b"))
###Output
elapsed: 0.0
add(10) 20
elapsed: 9.5367431640625e-07
add(20, 30) 50
elapsed: 9.5367431640625e-07
add("a", "b") ab
###Markdown
But what if we have multiple functions that require timing?
###Code
def sub(x, y=10):
return x - y
print('sub(10)', sub(10))
print('sub(20, 30)', sub(20, 30))
def timer(func, x, y=10):
before = time()
rv = func(x, y)
after = time()
print('elapsed', after - before)
return rv
print('add(10)', timer(add, 10))
print('add(20, 30)', timer(add, 20, 30))
print('add("a", "b")', timer(add, "a", "b"))
def timer(func):
def f(x, y=10):
before = time()
rv = func(x, y)
after = time()
print('elapsed', after - before)
return rv
return f
add = timer(add)
print('add(10)', add(10))
print('add(20, 30)', add(20, 30))
print('add("a", "b")', add("a", "b"))
# Don't need to do add = timer(add) with decorators...
@timer
def add_dec(x, y=10):
return x + y
@timer
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
# Don't hardcode parameters in decorator functions
def timer_k(func):
def f(*args, **kwargs):
before = time()
rv = func(*args, **kwargs)
after = time()
print('elapsed', after - before)
return rv
return f
@timer_k
def add_dec(x, y=10):
return x + y
@timer_k
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
# What if I want to run a function n number of times
# Goal: have add run 3 times in a row and sub run twice; with a single module-level n both run the same number of times (see Higher Order Decorators below)
n = 2
def ntimes(f):
def wrapper(*args, **kwargs):
for _ in range(n):
print('running {.__name__}'.format(f))
rv = f(*args, **kwargs)
return rv
return wrapper
@ntimes
def add_dec(x, y=10):
return x + y
@ntimes
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
###Output
running add_dec
running add_dec
add(10) 20
running add_dec
running add_dec
add(20, 30) 50
running add_dec
running add_dec
add("a", "b") ab
running sub_dec
running sub_dec
sub(10) 0
running sub_dec
running sub_dec
sub(20, 30) -10
###Markdown
Higher Order Decorators
###Code
def ntimes(n):
def inner(f):
def wrapper(*args, **kwargs):
for _ in range(n):
print('running {.__name__}'.format(f))
rv = f(*args, **kwargs)
return rv
return wrapper
return inner
@ntimes(2)
def add_hdec(x, y=10):
return x + y
@ntimes(4)
def sub_hdec(x, y=10):
return x - y
print('add(10)', add_hdec(10))
print('add(20, 30)', add_hdec(20, 30))
print('add("a", "b")', add_hdec("a", "b"))
print('sub(10)', sub_hdec(10))
print('sub(20, 30)', sub_hdec(20, 30))
###Output
running add_hdec
running add_hdec
add(10) 20
running add_hdec
running add_hdec
add(20, 30) 50
running add_hdec
running add_hdec
add("a", "b") ab
running sub_hdec
running sub_hdec
running sub_hdec
running sub_hdec
sub(10) 0
running sub_hdec
running sub_hdec
running sub_hdec
running sub_hdec
sub(20, 30) -10
###Markdown
Generators
###Code
# gen.py - use whenever sequencing is needed
# top-level syntax, function -> underscore method
# x() __call__
def add1(x, y):
return x + y
class Adder:
def __call__(self, x, y):
return x + y
add2 = Adder()
add1(10, 20)
add2(10, 20)
# top-level syntax, function -> underscore method
# x() __call__
def add1(x, y):
return x + y
class Adder:
def __init__(self):
self.z = 0
def __call__(self, x, y):
self.z += 1
return x + y + self.z
add2 = Adder()
from time import sleep
# This example has storage... and has eager return of the result sets
def compute():
rv = []
for i in range(10):
sleep(.5)
rv.append(i)
return rv
compute()
###Output
_____no_output_____
###Markdown
Wasteful because we have to wait for the entire action to complete and be read into memory, when we really just care about each number (one by one)
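A quick way to see the cost (a side illustration, assuming CPython): the materialized list holds every element up front, while a generator object stays tiny no matter how many values it will eventually produce.
###Code
# Compare in-memory size of an eager list vs. a lazy generator (illustrative only).
import sys
eager = [i for i in range(100000)]   # all 100000 ints are materialized now
lazy = (i for i in range(100000))    # nothing is computed until you iterate
print('list size in bytes:', sys.getsizeof(eager))
print('generator size in bytes:', sys.getsizeof(lazy))
###Output
_____no_output_____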
###Code
class Compute:
def __call__(self):
rv = []
for i in range(100000):
sleep(5)
rv.append(i)
return rv
def __iter__(self):
self.last = 0
return self
def __next__(self):
rv = self.last
self.last += 1
if self.last > 10:
raise StopIteration()
sleep(.5)
return self.last
compute = Compute()
# THIS IS UGLY... now let's make a generator
#This is a generator... don't eagerly compute. Return to user as they ask for it...
def compute():
for i in range(10):
sleep(.5)
yield i
# for x in xs:
# pass
# xi = iter(xs) -> __iter__
# while True:
# x = next(xi) -> __next__
for val in compute():
print(val)
class Api:
def run_this_first(self):
first()
def run_this_second(self):
second()
def run_this_last(self):
last()
def api():
first()
yield
second()
yield
last()
###Output
_____no_output_____
###Markdown
Context Manager
###Code
# cty.py
from sqlite3 import connect
# with ctx() as x:
# pass
# x = ctx().__enter__
# try:
# pass
# finally:
# x.__exit__
class temptable:
def __init__(self, cur):
self.cur = cur
def __enter__(self):
print('__enter__')
self.cur.execute('create table points(x int, y int)')
def __exit__(self, *args):
print('__exit__')
self.cur.execute('drop table points')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
for row in cur.execute('select sum(x * y) from points'):
print(row)
!rm test.db  # shell escape in Jupyter: remove the db file between runs
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
class contextmanager:
def __init__(self, cur):
self.cur = cur
def __enter__(self):
self.gen = temptable(self.cur)
next(self.gen)
def __exit__(self, *args):
next(self.gen, None)
with connect('test.db') as conn:
cur = conn.cursor()
with contextmanager(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
for row in cur.execute('select sum(x * y) from points'):
print(row)
class contextmanager:
def __init__(self, gen):
self.gen = gen
def __call__(self, *args, **kwargs):
self.args, self.kwargs = args, kwargs
return self
def __enter__(self):
self.gen_inst = self.gen(*self.args, **self.kwargs)
next(self.gen_inst)
def __exit__(self, *args):
next(self.gen_inst, None)
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
temptable = contextmanager(temptable)
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
class contextmanager:
def __init__(self, gen):
self.gen = gen
def __call__(self, *args, **kwargs):
self.args, self.kwargs = args, kwargs
return self
def __enter__(self):
self.gen_inst = self.gen(*self.args, **self.kwargs)
next(self.gen_inst)
def __exit__(self, *args):
next(self.gen_inst, None)
@contextmanager
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
from sqlite3 import connect
from contextlib import contextmanager
@contextmanager
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
try:
yield
finally:
cur.execute('drop table points')
print('dropped table')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
###Output
created table
(1, 1)
(1, 2)
(2, 1)
(2, 2)
dropped table
###Markdown
What Does It Take to Be An Expert At Python

Notebook based on James Powell's talk at PyData 2017: https://www.youtube.com/watch?v=7lmCu8wz8ro. If you want to become an expert in Python, you should definitely watch this PyData talk from James Powell.

Video Index
* metaclasses: 18:50
* metaclasses (explained): 40:40
* decorator: 45:20
* generator: 1:04:30
* context manager: 1:22:37
* summary: 1:40:00

Definitions

Python is a language oriented around protocols: some behavior, syntax, bytecode, or top-level function, and there is a way to tell Python how to implement it on an arbitrary object via underscore (dunder) methods. The exact correspondence is usually guessable, but if you can't guess it, you can always google the Python data model.

Metaclass

Mechanism: some hook into the class construction process. Question: do you have these methods implemented? Meaning: library code vs. user code - how do you enforce a constraint?

Ways to enforce that user code implements a certain method:
1. `__build_class__`: least common
2. metaclasses: `class BaseMeta(type):` - override `__new__`
3. Python 3.6+: `__init_subclass__`

Decorator

Hooks into the idea that everything creates a structure at run time. Wraps sets of functions with a before and after behavior.

Generators

Take a single computation that would otherwise run eagerly from the injection of its parameters to the final result, and interleave it with other code by adding yield points where you can hand back intermediate values (one small piece of the computation) and also yield control back to the caller. Think of a generator as a way to take one long piece of computation and break it up into small parts.

Context managers

Two structures that allow you to tie two actions together: a setup action and a teardown action, making sure they always happen in concordance with each other.
###Code
# some behavior that I want to implement -> write some __ function __
# top-level function or top-level syntax -> corresponding __
# x + y -> __add__
# init x -> __init__
# repr(x) --> __repr__
# x() -> __call__
class Polynomial:
def __init__(self, *coeffs):
self.coeffs = coeffs
def __repr__(self):
return 'Polynomial(*{!r})'.format(self.coeffs)
def __add__(self, other):
return Polynomial(*(x + y for x, y in zip(self.coeffs, other.coeffs)))
def __len__(self):
return len(self.coeffs)
def __call__(self):
pass
###Output
_____no_output_____
###Markdown
3 Core Patterns to understand object orientation* Protocol view of python* Built-in inheritance protocol (where to go)* Caveats around how object orientation in python works
###Code
p1 = Polynomial(1, 2, 3)
p2 = Polynomial(3, 4, 3)
p1 + p2
len(p1)
###Output
_____no_output_____
###Markdown
Metaclasses
###Code
# File 1 - library.py
class Base:
def food(self):
return 'foo'
# File2 - user.py
assert hasattr(Base, 'foo'), "you broke it, you fool!"
class Derived(Base):
def bar(self):
return self.foo
# File 1 - library.py
class Base:
def foo(self):
return self.bar()
# File2 - user.py
assert hasattr(Base, 'foo'), "you broke it, you fool!"
class Derived(Base):
def bar(self):
return 'bar'
Derived.bar
def _():
class Base:
pass
from dis import dis
dis(_) # LOAD_BUILD_CLASS
# Catch Building of Classes
class Base:
def foo(self):
return self.bar()
old_bc = __build_class__
def my_bc(*a, **kw):
print('my buildclass ->', a, kw)
return old_bc(*a, **kw)
import builtins
builtins.__build_class__ = my_bc
# Catch Building of Classes
class Base:
def foo(self):
return self.bar()
old_bc = __build_class__
def my_bc(fun, name, base=None, **kw):
if base is Base:
print('Check if bar method defined')
if base is not None:
return old_bc(fun, name, base, **kw)
return old_bc(fun, name, **kw)
import builtins
builtins.__build_class__ = my_bc
import builtins
import importlib
importlib.reload(builtins)
class BaseMeta(type):
def __new__(cls, name, bases, body):
print('BaseMeta.__new__', cls, name, bases, body)
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
class BaseMeta(type):
def __new__(cls, name, bases, body):
if not 'bar' in body:
raise TypeError('bad user class')
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
class BaseMeta(type):
def __new__(cls, name, bases, body):
if name != 'Base' and not 'bar' in body:
raise TypeError('bad user class')
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
def __init_subclass__(*a, **kw):
print('init_subclass', a, kw)
return super().__init_subclass__(*a, **kw)
help(Base.__init_subclass__)
###Output
_____no_output_____
###Markdown
Decorators
###Code
# dec.py
def add(x, y=10):
return x + y
add(10, 20)
add
# Name of function
add.__name__
# What module function is assigned to
add.__module__
# Default values
add.__defaults__
# Byte code for function
add.__code__.co_code
# Variable names function interacts with
add.__code__.co_varnames
###Output
_____no_output_____
###Markdown
What's your source code?
###Code
from inspect import getsource
getsource(add)
print(getsource(add))
# What file are you in?
from inspect import getfile
getfile(add)
from inspect import getmodule
getmodule(add)
print('add(10)', add(10))
print('add(20, 30)', add(20, 30))
print('add("a", "b")', add("a", "b"))
#Count how long it took to run
from time import time
def add_timer(x, y=10):
before = time()
rv = x + y
after = time()
print('elapsed:', after - before)
return rv
print('add(10)', add_timer(10))
print('add(20, 30)', add_timer(20, 30))
print('add("a", "b")', add_timer("a", "b"))
###Output
_____no_output_____
###Markdown
But what if we have multiple functions that require timing?
###Code
def sub(x, y=10):
return x - y
print('sub(10)', sub(10))
print('sub(20, 30)', sub(20, 30))
def timer(func, x, y=10):
before = time()
rv = func(x, y)
after = time()
print('elapsed', after - before)
return rv
print('add(10)', timer(add, 10))
print('add(20, 30)', timer(add, 20, 30))
print('add("a", "b"', timer(add, "a", "b"))
from functools import wraps
def timer(func):
@wraps(func) # copying over the function name, docstring, arguments list, etc
def f(x, y=10):
before = time()
rv = func(x, y)
after = time()
print('elapsed', after - before)
return rv
return f
add = timer(add)
add.__name__
print('add(10)', add(10))
print('add(20, 30)', add(20, 30))
print('add("a", "b")', add("a", "b"))
# Don't need to do add = timer(add) with decorators...
@timer
def add_dec(x, y=10):
return x + y
@timer
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
# Don't hardcode parameters in decorator functions
def timer_k(func):
def f(*args, **kwargs):
before = time()
rv = func(*args, **kwargs)
after = time()
print('elapsed', after - before)
return rv
return f
@timer_k
def add_dec(x, y=10):
return x + y
@timer_k
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
# What if I want to run a function n number of times
# Goal: have add run 3 times in a row and sub run twice; with a single module-level n both run the same number of times (see Higher Order Decorators below)
n = 2
def ntimes(f):
def wrapper(*args, **kwargs):
for _ in range(n):
print('running {.__name__}'.format(f))
rv = f(*args, **kwargs)
return rv
return wrapper
@ntimes
def add_dec(x, y=10):
return x + y
@ntimes
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
###Output
_____no_output_____
###Markdown
Higher Order Decorators
###Code
def ntimes(n):
def inner(f):
def wrapper(*args, **kwargs):
for _ in range(n):
print('running {.__name__}'.format(f))
rv = f(*args, **kwargs)
return rv
return wrapper
return inner
@ntimes(2)
def add_hdec(x, y=10):
return x + y
@ntimes(4)
def sub_hdec(x, y=10):
return x - y
print('add(10)', add_hdec(10))
print('add(20, 30)', add_hdec(20, 30))
print('add("a", "b")', add_hdec("a", "b"))
print('sub(10)', sub_hdec(10))
print('sub(20, 30)', sub_hdec(20, 30))
###Output
_____no_output_____
###Markdown
Generators
###Code
# gen.py - use whenever sequencing is needed
# top-level syntax, function -> underscore method
# x() __call__
def add1(x, y):
return x + y
class Adder:
def __call__(self, x, y):
return x + y
add2 = Adder()
add1(10, 20)
add2(10, 20)
# top-level syntax, function -> underscore method
# x() __call__
def add1(x, y):
return x + y
class Adder:
def __init__(self):
self.z = 0
def __call__(self, x, y):
self.z += 1
return x + y + self.z
add2 = Adder()
from time import sleep
# This example has storage... and has eager return of the result sets
def compute():
rv = []
for i in range(10):
sleep(.5)
rv.append(i)
return rv
compute()
###Output
_____no_output_____
###Markdown
Wasteful because we have to wait for the entire action to complete and be read into memory, when we really just care about each number (one by one). The iterator protocol: iter(), next()
###Code
# for x in xs:
# pass
# xi = iter(xs) -> __iter__
# while True:
# x = next(xi) -> __next__
class Compute:
def __call__(self):
rv = []
for i in range(100000):
sleep(5)
rv.append(i)
return rv
def __iter__(self):
self.last = 0
return self
def __next__(self):
rv = self.last
self.last += 1
if self.last > 10:
raise StopIteration()
sleep(.5)
return self.last
compute = Compute()
# THIS IS UGLY... now let's make a generator
###Output
_____no_output_____
###Markdown
Generator syntax
###Code
#This is a generator... don't eagerly compute. Return to user as they ask for it...
def compute():
for i in range(10):
sleep(.5)
yield i
for val in compute():
print(val)
###Output
_____no_output_____
###Markdown
Enforcing order to run methods- generator / coroutine: interleaving
###Code
# ask user nicely to run functions in order
class Api:
def run_this_first(self):
first()
def run_this_second(self):
second()
def run_this_last(self):
last()
# Instead, enforcing order by using generator
def api():
print('First')
yield
print('Second')
yield
print('Last')
gen = api()
next(gen)
next(gen)
next(gen, None)
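# (Aside, a hypothetical extension of the above:) generators can also *receive* values
# via send(), which is the interleaving/coroutine idea this section refers to.
def echo():
    while True:
        received = yield
        print('echo got:', received)
e = echo()
next(e)            # prime the coroutine up to its first yield
e.send('First')
e.send('Second')
e.close()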
###Output
_____no_output_____
###Markdown
Context Manager - https://docs.python.org/3/library/contextlib.html (`contextlib.contextmanager`)
###Code
# cty.py
from sqlite3 import connect
# with ctx() as x:
# pass
# x = ctx().__enter__
# try:
# pass
# finally:
# x.__exit__
class temptable:
def __init__(self, cur):
self.cur = cur
def __enter__(self):
print('__enter__')
self.cur.execute('create table points(x int, y int)')
def __exit__(self, *args):
print('__exit__')
self.cur.execute('drop table points')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
for row in cur.execute('select sum(x * y) from points'):
print(row)
!rm test.db  # shell escape in Jupyter: remove the db file between runs
###Output
_____no_output_____
###Markdown
Adding a generator - the version above can call `exit` before `enter`
###Code
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
class contextmanager:
def __init__(self, cur):
self.cur = cur
def __enter__(self):
self.gen = temptable(self.cur)
next(self.gen)
def __exit__(self, *args):
next(self.gen, None)
with connect('test.db') as conn:
cur = conn.cursor()
with contextmanager(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
for row in cur.execute('select sum(x * y) from points'):
print(row)
###Output
_____no_output_____
###Markdown
More general context manager, taking a `gen`
###Code
class contextmanager:
def __init__(self, gen):
self.gen = gen
def __call__(self, *args, **kwargs):
self.args, self.kwargs = args, kwargs
return self
def __enter__(self):
self.gen_inst = self.gen(*self.args, **self.kwargs)
next(self.gen_inst)
def __exit__(self, *args):
next(self.gen_inst, None)
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
temptable = contextmanager(temptable)
with connect('test.db') as conn:
cur = conn.cursor()
#with contextmanager(temptable)(cur):
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
###Output
_____no_output_____
###Markdown
Adding a Decorator: `temptable = contextmanager(temptable)` -> `@contextmanager`
###Code
class contextmanager:
def __init__(self, gen):
self.gen = gen
def __call__(self, *args, **kwargs):
self.args, self.kwargs = args, kwargs
return self
def __enter__(self):
self.gen_inst = self.gen(*self.args, **self.kwargs)
next(self.gen_inst)
def __exit__(self, *args):
next(self.gen_inst, None)
@contextmanager
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
###Output
_____no_output_____
###Markdown
contextlib: decorator to turn a `generator` into a `contextmanager`
###Code
from sqlite3 import connect
from contextlib import contextmanager
@contextmanager
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
try:
yield
finally:
cur.execute('drop table points')
print('dropped table')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
###Output
_____no_output_____
###Markdown
What Does It Take to Be An Expert At Python

Notebook based on James Powell's talk at PyData 2017: https://www.youtube.com/watch?v=7lmCu8wz8ro. If you want to become an expert in Python, you should definitely watch this PyData talk from James Powell.

Video Index
* metaclasses: 18:50
* metaclasses (explained): 40:40
* decorator: 45:20
* generator: 1:04:30
* context manager: 1:22:37
* summary: 1:40:00

**Definitions** Python is a language oriented around protocols: some behavior, syntax, bytecode, or top-level function, and there is a way to tell Python how to implement it on an arbitrary object via underscore (dunder) methods. The exact correspondence is usually guessable, but if you can't guess it, you can always google the Python data model.

**Metaclass** Mechanism: some hook into the class construction process. Question: do you have these methods implemented? Meaning: library code vs. user code - how do you enforce a constraint?

**Decorator** Hooks into the idea that everything creates a structure at run time. Wraps sets of functions with a before and after behavior.

**Generators** Take a single computation that would otherwise run eagerly from the injection of its parameters to the final result, and interleave it with other code by adding yield points where you can hand back intermediate values (one small piece of the computation) and also yield control back to the caller. Think of a generator as a way to take one long piece of computation and break it up into small parts.

**Context managers** Two structures that allow you to tie two actions together: a setup action and a teardown action, making sure they always happen in concordance with each other.
###Code
# some behavior that I want to implement -> write some __ function __
# top-level function or top-level syntax -> corresponding __
# x + y -> __add__
# init x -> __init__
# repr(x) --> __repr__
# x() -> __call__
class Polynomial:
def __init__(self, *coeffs):
self.coeffs = coeffs
def __repr__(self):
return 'Polynomial(*{!r})'.format(self.coeffs)
def __add__(self, other):
return Polynomial(*(x + y for x, y in zip(self.coeffs, other.coeffs)))
def __len__(self):
return len(self.coeffs)
def __call__(self):
pass
###Output
_____no_output_____
###Markdown
3 Core Patterns to understand object orientation* Protocol view of python* Built-in inheritance protocol (where to go)* Caveats around how object orientation in python works
###Code
p1 = Polynomial(1, 2, 3)
p2 = Polynomial(3, 4, 3)
p1 + p2
len(p1)
###Output
_____no_output_____
###Markdown
Metaclasses
###Code
# File 1 - library.py
class Base:
def food(self):
return 'foo'
# File2 - user.py
assert hasattr(Base, 'foo'), "you broke it, you fool!"
class Derived(Base):
def bar(self):
return self.foo
# File 1 - library.py
class Base:
def foo(self):
return self.bar()
# File2 - user.py
assert hasattr(Base, 'foo'), "you broke it, you fool!"
class Derived(Base):
def bar(self):
return 'bar'
Derived.bar
def _():
class Base:
pass
from dis import dis
dis(_) # LOAD_BUILD_CLASS
# Catch Building of Classes
class Base:
def foo(self):
return self.bar()
old_bc = __build_class__
def my_bc(*a, **kw):
print('my buildclass ->', a, kw)
return old_bc(*a, **kw)
import builtins
builtins.__build_class__ = my_bc
# Catch Building of Classes
class Base:
def foo(self):
return self.bar()
old_bc = __build_class__
def my_bc(fun, name, base=None, **kw):
if base is Base:
print('Check if bar method defined')
if base is not None:
return old_bc(fun, name, base, **kw)
return old_bc(fun, name, **kw)
import builtins
builtins.__build_class__ = my_bc
import builtins
import importlib
importlib.reload(builtins)
class BaseMeta(type):
def __new__(cls, name, bases, body):
print('BaseMeta.__new__', cls, name, bases, body)
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
class BaseMeta(type):
def __new__(cls, name, bases, body):
if not 'bar' in body:
raise TypeError('bad user class')
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
class BaseMeta(type):
def __new__(cls, name, bases, body):
if name != 'Base' and not 'bar' in body:
raise TypeError('bad user class')
return super().__new__(cls, name, bases, body)
class Base(metaclass=BaseMeta):
def foo(self):
return self.bar()
def __init_subclass__(*a, **kw):
print('init_subclass', a, kw)
return super().__init_subclass__(*a, **kw)
help(Base.__init_subclass__)
###Output
Help on method __init_subclass__ in module __main__:
__init_subclass__(*a, **kw) method of __main__.BaseMeta instance
This method is called when a class is subclassed.
The default implementation does nothing. It may be
overridden to extend subclasses.
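###Markdown
A minimal sketch (not from the talk, and assuming Python 3.6+) of enforcing the same "subclasses must define bar" constraint with `__init_subclass__` instead of a metaclass; the class and method names simply mirror the example above.
###Code
class Base:
    def foo(self):
        return self.bar()
    def __init_subclass__(cls, **kwargs):
        # Runs every time Base is subclassed; reject subclasses that lack bar().
        super().__init_subclass__(**kwargs)
        if not hasattr(cls, 'bar'):
            raise TypeError('bad user class: missing bar')

class Derived(Base):
    def bar(self):
        return 'bar'

print(Derived().foo())   # 'bar'; a subclass without bar() would raise TypeError at class-creation time
###Output
_____no_output_____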
###Markdown
Decorators
###Code
# dec.py
def add(x, y=10):
return x + y
add(10, 20)
add
# Name of function
add.__name__
# What module function is assigned to
add.__module__
# Default values
add.__defaults__
# Byte code for function
add.__code__.co_code
# Variable names function interacts with
add.__code__.co_varnames
###Output
_____no_output_____
###Markdown
What's your source code?
###Code
from inspect import getsource
getsource(add)
print(getsource(add))
# What file are you in?
from inspect import getfile
getfile(add)
from inspect import getmodule
getmodule(add)
print('add(10)', add(10))
print('add(20, 30)', add(20, 30))
print('add("a", "b")', add("a", "b"))
#Count how long it took to run
from time import time

def add_timer(x, y=10):
before = time()
rv = x + y
after = time()
print('elapsed:', after - before)
return rv
print('add(10)', add_timer(10))
print('add(20, 30)', add_timer(20, 30))
print('add("a", "b")', add_timer("a", "b"))
###Output
elapsed: 0.0
add(10) 20
elapsed: 9.5367431640625e-07
add(20, 30) 50
elapsed: 9.5367431640625e-07
add("a", "b") ab
###Markdown
But what if we have multiple functions that require timing?
###Code
def sub(x, y=10):
return x - y
print('sub(10)', sub(10))
print('sub(20, 30)', sub(20, 30))
def timer(func, x, y=10):
before = time()
rv = func(x, y)
after = time()
print('elapsed', after - before)
return rv
print('add(10)', timer(add, 10))
print('add(20, 30)', timer(add, 20, 30))
print('add("a", "b")', timer(add, "a", "b"))
def timer(func):
def f(x, y=10):
before = time()
rv = func(x, y)
after = time()
print('elapsed', after - before)
return rv
return f
add = timer(add)
print('add(10)', add(10))
print('add(20, 30)', add(20, 30))
print('add("a", "b")', add("a", "b"))
# Don't need to do add = timer(add) with decorators...
@timer
def add_dec(x, y=10):
return x + y
@timer
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
# Don't hardcode parameters in decorator functions
def timer_k(func):
def f(*args, **kwargs):
before = time()
rv = func(*args, **kwargs)
after = time()
print('elapsed', after - before)
return rv
return f
@timer_k
def add_dec(x, y=10):
return x + y
@timer_k
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
# What if I want to run a function n number of times
# Let's have each decorated function run n times in a row (here n = 2)
n = 2
def ntimes(f):
def wrapper(*args, **kwargs):
for _ in range(n):
print('running {.__name__}'.format(f))
rv = f(*args, **kwargs)
return rv
return wrapper
@ntimes
def add_dec(x, y=10):
return x + y
@ntimes
def sub_dec(x, y=10):
return x - y
print('add(10)', add_dec(10))
print('add(20, 30)', add_dec(20, 30))
print('add("a", "b")', add_dec("a", "b"))
print('sub(10)', sub_dec(10))
print('sub(20, 30)', sub_dec(20, 30))
###Output
running add_dec
running add_dec
add(10) 20
running add_dec
running add_dec
add(20, 30) 50
running add_dec
running add_dec
add("a", "b") ab
running sub_dec
running sub_dec
sub(10) 0
running sub_dec
running sub_dec
sub(20, 30) -10
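###Markdown
A practical side note (a sketch, not part of the talk): the wrappers above replace the decorated function's metadata such as `__name__` and `__doc__`; `functools.wraps` copies that metadata back onto the wrapper.
###Code
from functools import wraps
from time import time

def timed(func):
    @wraps(func)                      # keep func.__name__, __doc__, etc. on the wrapper
    def wrapper(*args, **kwargs):
        before = time()
        rv = func(*args, **kwargs)
        print('elapsed', time() - before)
        return rv
    return wrapper

@timed
def mul(x, y=10):
    return x * y

print(mul.__name__)                   # prints 'mul' rather than 'wrapper'
###Output
_____no_output_____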
###Markdown
Higher Order Decorators
###Code
def ntimes(n):
def inner(f):
def wrapper(*args, **kwargs):
for _ in range(n):
print('running {.__name__}'.format(f))
rv = f(*args, **kwargs)
return rv
return wrapper
return inner
@ntimes(2)
def add_hdec(x, y=10):
return x + y
@ntimes(4)
def sub_hdec(x, y=10):
return x - y
print('add(10)', add_hdec(10))
print('add(20, 30)', add_hdec(20, 30))
print('add("a", "b")', add_hdec("a", "b"))
print('sub(10)', sub_hdec(10))
print('sub(20, 30)', sub_hdec(20, 30))
###Output
running add_hdec
running add_hdec
add(10) 20
running add_hdec
running add_hdec
add(20, 30) 50
running add_hdec
running add_hdec
add("a", "b") ab
running sub_hdec
running sub_hdec
running sub_hdec
running sub_hdec
sub(10) 0
running sub_hdec
running sub_hdec
running sub_hdec
running sub_hdec
sub(20, 30) -10
###Markdown
Generators
###Code
# gen.py - use whenever sequencing is needed
# top-level syntax, function -> underscore method
# x() __call__
def add1(x, y):
return x + y
class Adder:
def __call__(self, x, y):
return x + y
add2 = Adder()
add1(10, 20)
add2(10, 20)
# top-level syntax, function -> underscore method
# x() __call__
def add1(x, y):
return x + y
class Adder:
def __init__(self):
self.z = 0
def __call__(self, x, y):
self.z += 1
return x + y + self.z
add2 = Adder()
from time import sleep
# This example has storage... and has eager return of the result sets
def compute():
rv = []
for i in range(10):
sleep(.5)
rv.append(i)
return rv
compute()
###Output
_____no_output_____
###Markdown
Wasteful because we have to wait for the entire action to complete and be read into memory, when we really just care about each number (one by one)
###Code
class Compute:
def __call__(self):
rv = []
for i in range(100000):
sleep(5)
rv.append(i)
return rv
def __iter__(self):
self.last = 0
return self
def __next__(self):
rv = self.last
self.last += 1
if self.last > 10:
raise StopIteration()
sleep(.5)
return self.last
compute = Compute()
# THIS IS UGLY... now let's make a generator
#This is a generator... don't eagerly compute. Return to user as they ask for it...
def compute():
for i in range(10):
sleep(.5)
yield i
# for x in xs:
# pass
# xi = iter(xs) -> __iter__
# while True:
# x = next(xi) -> __next__
for val in compute():
print(val)
class Api:
def run_this_first(self):
first()
def run_this_second(self):
second()
def run_this_last(self):
last()
def api():
first()
yield
second()
yield
last()
###Output
_____no_output_____
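###Markdown
A sketch of how such a sequencing generator might be driven step by step; `first`, `second`, and `last` are stand-in functions defined here only for illustration.
###Code
def first():  print('first')
def second(): print('second')
def last():   print('last')

def api():
    first()
    yield
    second()
    yield
    last()

steps = api()
next(steps)            # runs first(), then pauses before second()
next(steps)            # runs second(), then pauses before last()
next(steps, None)      # runs last(); the default keeps StopIteration from propagating
###Output
_____no_output_____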
###Markdown
Context Manager
###Code
# cty.py
from sqlite3 import connect
# with ctx() as x:
# pass
# x = ctx().__enter__
# try:
# pass
# finally:
# x.__exit__
class temptable:
def __init__(self, cur):
self.cur = cur
def __enter__(self):
print('__enter__')
self.cur.execute('create table points(x int, y int)')
def __exit__(self, *args):
print('__exit__')
self.cur.execute('drop table points')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
for row in cur.execute('select sum(x * y) from points'):
print(row)
!rm test.db  # shell command: remove the leftover database file from the previous run
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
class contextmanager:
def __init__(self, cur):
self.cur = cur
def __enter__(self):
self.gen = temptable(self.cur)
next(self.gen)
def __exit__(self, *args):
next(self.gen, None)
with connect('test.db') as conn:
cur = conn.cursor()
with contextmanager(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
for row in cur.execute('select sum(x * y) from points'):
print(row)
class contextmanager:
def __init__(self, gen):
self.gen = gen
def __call__(self, *args, **kwargs):
self.args, self.kwargs = args, kwargs
return self
def __enter__(self):
self.gen_inst = self.gen(*self.args, **self.kwargs)
next(self.gen_inst)
def __exit__(self, *args):
next(self.gen_inst, None)
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
temptable = contextmanager(temptable)
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
class contextmanager:
def __init__(self, gen):
self.gen = gen
def __call__(self, *args, **kwargs):
self.args, self.kwargs = args, kwargs
return self
def __enter__(self):
self.gen_inst = self.gen(*self.args, **self.kwargs)
next(self.gen_inst)
def __exit__(self, *args):
next(self.gen_inst, None)
@contextmanager
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
yield
cur.execute('drop table points')
print('dropped table')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
from sqlite3 import connect
from contextlib import contextmanager
@contextmanager
def temptable(cur):
cur.execute('create table points(x int, y int)')
print('created table')
try:
yield
finally:
cur.execute('drop table points')
print('dropped table')
with connect('test.db') as conn:
cur = conn.cursor()
with temptable(cur):
cur.execute('insert into points (x, y) values(1, 1)')
cur.execute('insert into points (x, y) values(1, 2)')
cur.execute('insert into points (x, y) values(2, 1)')
cur.execute('insert into points (x, y) values(2, 2)')
for row in cur.execute("select x, y from points"):
print(row)
###Output
created table
(1, 1)
(1, 2)
(2, 1)
(2, 2)
dropped table
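###Markdown
A short follow-up sketch (using an in-memory SQLite database so no file is left behind) showing why the try/finally matters: the table is still dropped even when the body of the with-block raises.
###Code
from sqlite3 import connect
from contextlib import contextmanager

@contextmanager
def temptable(cur):
    cur.execute('create table points(x int, y int)')
    print('created table')
    try:
        yield
    finally:
        cur.execute('drop table points')
        print('dropped table')

with connect(':memory:') as conn:
    cur = conn.cursor()
    try:
        with temptable(cur):
            cur.execute('insert into points (x, y) values(1, 1)')
            raise RuntimeError('simulated failure inside the block')
    except RuntimeError:
        pass    # 'dropped table' was still printed before the exception escaped
###Output
_____no_output_____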
|
Assignmnet1_Siena_Vertudes.ipynb
|
###Markdown
--- **Welcome to Python Fundamentals**------In this module, we are going to establish or review our skills in Python programming. In this notebook we are going to cover:* *Variables and Data Types** *Operations** *Input and Output Operations** *Logic Control** *Iterables** *Functions*------ **Variables and Data Types**------> **Variables** are the identifiers of the values or data assigned to them. You may also refer to them as the "nouns" of the programming language because they are the placeholders for their respective values. Every variable has a data type, which affects how they interact with other variables and operators.---> **Data Types** are the different categories of data. They represent whether a value is numerical or a string value, including non-numerical characters. Boolean values are also included here, representing whether a statement is true or false.The different data types are as follows:* Integers* Floats* Strings* Booleans (*True* and *False*)------
###Code
x=1
y,z= 0,-2
y
type(x)
r = 2.0
type(r)
t = float(x)
type(t)
s,k,u = "0", '1', 'one'
type(u)
k_int = int(k)
k_int
type(k)
###Output
_____no_output_____
###Markdown
**Operations**------> These are the set of characters which decide the program's course of action. The operations to be discussed are as follows:* *Arithmetic Operations** *Assignment Operations** *Comparators** *Logical Operations** *Input/Output Operations*------ ---**Arithmetic**> Arithmetic Operators are used with numerical values in order to perform arithmetic, or common mathematical operations.---
###Code
a,b,c,d = 5,6,7,8
#Addition
S = a+b
S
#Subtraction
M = c-a
M
#Multiplication
X = a*d
X
#Division
H = c/a
H
#Floor Division
HD = d//b
HD
#Exponentiation
E = b**a
E
#Modulo
MOD = c%b
MOD
###Output
_____no_output_____
###Markdown
---**Assignment Operations**> Assignment operators are used to assign values to variables or to update already existing values. Compound forms such as `+=`, `-=`, `*=`, and `**=` combine an arithmetic operation with the assignment.---
###Code
I, J , K , L = 0, 100, 2, 2
J += L
J
H -= I
H
J *= 3
J
K **= 3
K
###Output
_____no_output_____
###Markdown
---**Comparators**> Comparators are the operations used to compare two values of the same data type. These operations return a *boolean* value, which is either *true* or *false*.---
###Code
res_1, res_2, res_3 = 1,2.0,"1"
true_val =1.0
## Equality
res_1 == true_val
## Non-equality
res_2 != true_val
## Inequality
t1 = res_1 > res_2
t2 = res_1 < res_2/2
t3 = res_1 >= res_2/2
t4 = res_1 <= res_2
t1
###Output
_____no_output_____
###Markdown
---**Logical**> Logical Operators are used to create conditional statements. These statements return a *boolean* value, which is either *true* or *false*, depending on the values given and the operator used.---
###Code
res_1 == true_val
res_1 is true_val
res_1 is not true_val
P,Q = True, False
conj = P and Q
conj
P, Q = True, False
disj = P or Q
disj
P, Q = True, False
nand = not(P and Q)
nand
P, Q= True, False
xor = (not P and Q) or (P and not Q)
xor
###Output
_____no_output_____
###Markdown
---**I/O: Input/Output**> The previous methods of defining variables have been to write the value into the code itself, but input and output operations allow the user to interact with the program while it runs. Through the *input()* function, the user may assign any characters to a variable (note that *input()* always returns a string). The *.format()* method allows the user to make their output appear less cluttered.---
###Code
print("Hello World")
cnt = 1
string = "Hello World"
print (string, ", Current run count is:", cnt)
cnt += 1
print(f"{string}, Current count is: {cnt}")
sem_grade = 82.243564657746123
name = "Ja"
print("Hello {}, your semestral grade is: {}".format(name, sem_grade))
pre,mid,fin = 0.25, 0.25, 0.5
print("The weights of your semestral grade are:\
\n\t{:.2%} for Prelims,\
\n\t{:.2%} for Midterms, and\
\n\t{:.2%} for Finals.".format(pre,mid,fin))
q= input("Enter your name: ")
q
name = input ("Your name: ")
pr = input("Enter Prelim: ")
md = input("Enter Midterm: ")
fn = input("Enter Final: ")
sg = 85
print("Hello {}, your semestral grade is: {}".format(name,sg))
###Output
Your name: Janus
Enter Prelim: 80.67
Enter Midterm: 80.78
Enter Final: 90.87
Hello Janus, your semestral grade is: 85
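###Markdown
Note that *input()* returns strings, which is why the semestral grade above was hardcoded. A minimal sketch (the values here are illustrative) of converting the three grade strings to numbers before averaging them:
###Code
pr, md, fn = "80.67", "80.78", "90.87"              # what input() would return: strings
sem = (float(pr) + float(md) + float(fn)) / 3       # convert to float before averaging
print("Computed semestral grade: {:.2f}".format(sem))
###Output
_____no_output_____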
###Markdown
**Looping Statements**------> Looping statements contain directions that are repeated for as long as the loop's condition holds, or once for every item in a collection. ------ ---**While**> While statements are a type of looping statement which *FIRST CHECKS* the condition before performing its instructions. The instructions are performed repeatedly until the condition is proven *FALSE*.---
###Code
## While Looping
s , k = 1,5
while(s<=k):
print(f"{s}\t|\t{k}")
s+=1
###Output
1 | 5
2 | 5
3 | 5
4 | 5
5 | 5
###Markdown
---**For**> For Loop Statements are used to perform a statement for each item or data in a set or range.---
###Code
# for (int p=0; p<10; i++){
# printf(p)}
p=0
for p in range(11):
print (p)
playlist = ["소리꾼", "기도", "All Too Well"]
print('Now Playing:\n')
for song in playlist:
print(song)
###Output
Now Playing:
소리꾼
기도
All Too Well
###Markdown
**Flow Control**------> Flow Control is used to determine the sequence in which the program's statements run. This is either sequential or conditional.------ ---**Condition Statements**> Conditional statements include the *if*, *elif*, and *else* statements. The program checks the condition of each statement, and if it evaluates to *TRUE*, the program runs that branch's instructions. If it evaluates to *FALSE*, the program moves on to the next statement.---
###Code
num1, num2 = 15,1
if(num1<num2):
print("Yes")
elif(num1>num2):
print("Hehe")
else:
print("eme")
###Output
Hehe
###Markdown
---**Function**> Functions are used to run a specific set of directions only when called upon. Data can be defined within a function and returned as its result. ---
###Code
# void DeleteUser(int username){
# delete(username); }
def delete_u (username):
print("Finished Deleting user: {}".format(username))
def delete_allu():
print("Finished Deleting all users")
username = 20201000
delete_u(20201000)
delete_allu()
def add(nume1, nume2):
print("I know how to add nume1 and nume2")
return nume1 + nume2
def power_base2(exponent):
return 2**exponent
nume1 = 10
nume2 = 5
exponent=5
#add(nume1,nume2)
power_base2(exponent)
add(nume1, nume2)
###Output
I know how to add nume1 and nume2
###Markdown
------**Lambda Functions**Lambda functions are anonymous functions: they are not bound to a name when they are defined. Lambda functions are typically used with higher-order functions, which are functions that take in other functions as arguments (a short sketch follows the emoji cell below).------ ```Activity & Lab Report```***Grade Calculator***---Create a **grade calculator** that computes the semestral grade of a course.Students should be able to *type their name*, the *name of the course, then their prelim, midterm, and final grades*.The program should *print the semestral grade rounded to 2 decimal places* and should display the following emojis depending on the situation:**happy** - when the *grade is greater than 70.00***laughing** - when the *grade is exactly 70.00***sad** - when the *grade is below 70.00*---**USE THESE EMOJIS:**Happy:```"\U0001F600"```LOL:```"\U0001F606"```Sad:```"\U0001F62D"```------
###Code
"\U0001F600 "
"\U0001F62D "
"\U0001F606 "
###Output
_____no_output_____
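###Markdown
The lambda definition above has no accompanying code, so here is a minimal sketch of a lambda passed to the built-in higher-order function *sorted()* (the data is illustrative only):
###Code
# An anonymous function, bound to no name, passed directly as the sort key.
grades = [("Ana", 88.5), ("Ben", 72.0), ("Cara", 91.25)]
ranked = sorted(grades, key=lambda pair: pair[1], reverse=True)
print(ranked)    # highest grade first
###Output
_____no_output_____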
###Markdown
---
###Code
name = input("Student Name [INPUT FULL NAME]: ")
course = input("Course: ")
print()
Prelims = float(input("Prelim Grade: "))
Midterms = float(input("Midterm Grade: "))
Finals = float(input("Final Grade: "))
Total = (Prelims) + (Midterms) + (Finals)
Sem = Total/3
sem_f= "{:.2f}".format(Sem)
print()
print("Student: {}".format(name))
print("Course: {}".format(course))
print("Your Semestral Grade is {}".format(sem_f))
if Sem > 70.00:
print("\U0001F600")
elif Sem == 70.00:
print("\U0001F606")
else:
print("\U0001F62D")
###Output
Student Name [INPUT FULL NAME]: Lee Felix
Course: Linear Algebra
Prelim Grade: 90
Midterm Grade: 78
Final Grade: 89.88
Student: Lee Felix
Course: Linear Algebra
Your Semestral Grade is 85.96
😀
|
0_09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
|
###Markdown
Apple Stock. Check out the [Apple Stock Exercises Video Tutorial](https://youtu.be/wpXkR_IZcug) to watch a data scientist go through the exercises. Introduction: We are going to use Apple's stock price. Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
# visualization
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv) Step 3. Assign it to a variable apple
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/09_Time_Series/Apple_Stock/appl_1980_2014.csv'
apple = pd.read_csv(url)
apple.head()
###Output
_____no_output_____
###Markdown
Step 4. Check out the type of the columns
###Code
apple.dtypes
###Output
_____no_output_____
###Markdown
Step 5. Transform the Date column as a datetime type
###Code
apple.Date = pd.to_datetime(apple.Date)
apple['Date'].head()
###Output
_____no_output_____
###Markdown
Step 6. Set the date as the index
###Code
apple = apple.set_index('Date')
apple.head()
###Output
_____no_output_____
###Markdown
Step 7. Are there any duplicate dates?
###Code
# NO! All are unique
apple.index.is_unique
###Output
_____no_output_____
###Markdown
Step 8. Oops... it seems the index starts from the most recent date. Make the first entry the oldest date.
###Code
apple = apple.sort_index(ascending=True)
apple.head()
###Output
_____no_output_____
###Markdown
Step 9. Get the last business day of each month
###Code
apple_month = apple.resample('BM').mean()
apple_month.head()
###Output
_____no_output_____
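###Markdown
Note that `resample('BM').mean()` returns the monthly *mean*, labeled at each business-month end. If the goal is the row actually observed on the last business day of each month, one option (a sketch; it sorts ascending first so "last" means the latest date in each month) is `.last()`:
###Code
last_bday = apple.sort_index().resample('BM').last()
last_bday.head()
###Output
_____no_output_____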
###Markdown
Step 10. What is the difference in days between the newest date and the oldest date?
###Code
(apple.index.max() - apple.index.min()).days
###Output
_____no_output_____
###Markdown
Step 11. How many months of data do we have?
###Code
apple_months = apple.resample('BM').mean()
len(apple_months.index)
###Output
_____no_output_____
###Markdown
Step 12. Plot the 'Adj Close' value. Set the size of the figure to 13.5 x 9 inches
###Code
# makes the plot and assign it to a variable
appl_open = apple['Adj Close'].plot(title = "Apple Stock")
# changes the size of the graph
fig = appl_open.get_figure()
fig.set_size_inches(13.5, 9)
###Output
_____no_output_____
|
tf.version.1/04.rnn/03.04.sequence.classification.biRNN.ipynb
|
###Markdown
Sequence classification by bi-directional RNN* Creating the **data pipeline** with `tf.data`* Preprocessing word sequences (variable input sequence length) using `tf.keras.preprocessing`* Using `tf.nn.embedding_lookup` for getting vector of tokens (eg. word, character)* Creating the model as **Class*** Reference * https://github.com/golbin/TensorFlow-Tutorials/blob/master/10%20-%20RNN/02%20-%20Autocomplete.py * https://github.com/aisolab/TF_code_examples_for_Deep_learning/blob/master/Tutorial%20of%20implementing%20Sequence%20classification%20with%20RNN%20series.ipynb
###Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import clear_output
import tensorflow as tf
slim = tf.contrib.slim
rnn = tf.contrib.rnn
tf.logging.set_verbosity(tf.logging.INFO)
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
os.environ["CUDA_VISIBLE_DEVICES"]="0"
from tensorflow.python.keras.preprocessing.text import Tokenizer
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
###Output
_____no_output_____
###Markdown
Prepare example data
###Code
x_train_words = ['good', 'bad', 'amazing', 'so good', 'bull shit',
'awesome', 'how dare', 'very much', 'nice', 'god damn it',
'very very very happy', 'what the fuck']
y_train = np.array([0, 1, 0, 0, 1,
0, 1, 0, 0, 1,
0, 1], dtype=np.int32)
# positive sample
index = 0
print("word: {}\nlabel: {}".format(x_train_words[index], y_train[index]))
# negative sample
index = 1
print("word: {}\nlabel: {}".format(x_train_words[index], y_train[index]))
###Output
word: bad
label: 1
###Markdown
Tokenizer
###Code
tokenizer = Tokenizer(char_level=True)
%%time
tokenizer.fit_on_texts(x_train_words)
num_chars = len(tokenizer.word_index) + 1
print("number of characters: {}".format(num_chars))
tokenizer.word_index
x_train_tokens = tokenizer.texts_to_sequences(x_train_words)
index = 2
print("text: {}".format(x_train_words[index]))
print("token: {}".format(x_train_tokens[index]))
x_train_seq_length = np.array([len(tokens) for tokens in x_train_tokens], dtype=np.int32)
num_seq_length = x_train_seq_length
max_seq_length = np.max(num_seq_length)
print(max_seq_length)
###Output
20
###Markdown
Create pad_seq data
###Code
#pad = 'pre'
pad = 'post'
x_train_pad = pad_sequences(sequences=x_train_tokens, maxlen=max_seq_length,
padding=pad, truncating=pad)
index = 7
print("text: {}\n".format(x_train_words[index]))
print("token: {}\n".format(x_train_tokens[index]))
print("pad: {}".format(x_train_pad[index]))
###Output
text: very much
token: [13, 2, 7, 8, 1, 10, 16, 18, 6]
pad: [13 2 7 8 1 10 16 18 6 0 0 0 0 0 0 0 0 0 0 0]
###Markdown
Tokenizer Inverse Map
###Code
idx = tokenizer.word_index
inverse_map = dict(zip(idx.values(), idx.keys()))
print(inverse_map)
def tokens_to_string(tokens):
# Map from tokens back to words.
words = [inverse_map[token] for token in tokens if token != 0]
# Concatenate all words.
text = "".join(words)
return text
index = 10
print("original text:\n{}\n".format(x_train_words[index]))
print("tokens to string:\n{}".format(tokens_to_string(x_train_tokens[index])))
###Output
original text:
very very very happy
tokens to string:
very very very happy
###Markdown
Create the Recurrent Neural Network. We are now ready to create the Recurrent Neural Network (RNN). We will use the TensorFlow API.
###Code
# Set the hyperparameter set
batch_size = 4
max_epochs = 50
#embedding_size = 8
num_units = 16 # the number of nodes in RNN hidden layer
num_classes = 2 # Two classes [True, False]
initializer_scale = 0.1
learning_rate = 1e-3
###Output
_____no_output_____
###Markdown
Set up dataset with `tf.data` create input pipeline with `tf.data.Dataset`
###Code
## create data pipeline with tf.data
train_dataset = tf.data.Dataset.from_tensor_slices((x_train_pad, x_train_seq_length, y_train))
train_dataset = train_dataset.shuffle(buffer_size = 100)
train_dataset = train_dataset.repeat(max_epochs)
train_dataset = train_dataset.batch(batch_size = batch_size)
print(train_dataset)
###Output
<BatchDataset shapes: ((?, 20), (?,), (?,)), types: (tf.int32, tf.int32, tf.int32)>
###Markdown
Define Iterator
###Code
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(handle,
train_dataset.output_types,
train_dataset.output_shapes)
seq_pad, seq_length, labels = iterator.get_next()
###Output
_____no_output_____
###Markdown
Define CharRNN class
###Code
class CharRNN:
def __init__(self, num_chars,
seq_pad, seq_length, labels,
num_units=num_units, num_classes=num_classes):
self.num_chars = num_chars
self.seq_pad = seq_pad
self.seq_length = seq_length
self.labels = labels
self.num_units = num_units
self.num_classes = num_classes
def build_embeddings(self):
with tf.variable_scope('embedding_layer'):
one_hot = tf.eye(self.num_chars, dtype=tf.float32)
one_hot_matrix = tf.get_variable(name='one_hot_embedding',
initializer=one_hot,
trainable=False) # embedding matrix: No training
self.embeddings = tf.nn.embedding_lookup(params=one_hot_matrix, ids=self.seq_pad)
def build_layers(self):
# bi-directional RNN cell
with tf.variable_scope('bi-directional_rnn_cell'):
cell_fw = rnn.BasicRNNCell(num_units=self.num_units)
cell_bw = rnn.BasicRNNCell(num_units=self.num_units)
_, states = tf.nn.bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs=self.embeddings,
sequence_length=self.seq_length,
dtype=tf.float32)
self.final_state = tf.concat([states[0], states[1]], axis=1)
def build_outputs(self):
logits = slim.fully_connected(inputs=self.final_state,
num_outputs=self.num_classes,
activation_fn=None,
scope='logits')
return logits
def bce_loss(self):
one_hot_labels = tf.one_hot(self.labels, depth=self.num_classes)
loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=one_hot_labels,
logits=self.logits,
scope='binary_cross_entropy')
return loss
def predict(self):
with tf.variable_scope('predictions'):
predictions = tf.argmax(input=self.logits, axis=-1, output_type=tf.int32)
return predictions
def build(self):
self.global_step = tf.train.get_or_create_global_step()
self.build_embeddings()
self.build_layers()
self.logits = self.build_outputs()
self.loss = self.bce_loss()
self.predictions = self.predict()
print("complete model build.")
model = CharRNN(num_chars=num_chars,
seq_pad=seq_pad,
seq_length=seq_length,
labels=labels,
num_units=num_units,
num_classes=num_classes)
model.build()
# show info for trainable variables
t_vars = tf.trainable_variables()
slim.model_analyzer.analyze_vars(t_vars, print_info=True)
###Output
complete model build.
---------
Variables: name (type shape) [size]
---------
bi-directional_rnn_cell/bidirectional_rnn/fw/basic_rnn_cell/kernel:0 (float32_ref 41x16) [656, bytes: 2624]
bi-directional_rnn_cell/bidirectional_rnn/fw/basic_rnn_cell/bias:0 (float32_ref 16) [16, bytes: 64]
bi-directional_rnn_cell/bidirectional_rnn/bw/basic_rnn_cell/kernel:0 (float32_ref 41x16) [656, bytes: 2624]
bi-directional_rnn_cell/bidirectional_rnn/bw/basic_rnn_cell/bias:0 (float32_ref 16) [16, bytes: 64]
logits/weights:0 (float32_ref 32x2) [64, bytes: 256]
logits/biases:0 (float32_ref 2) [2, bytes: 8]
Total size of variables: 1410
Total bytes of variables: 5640
###Markdown
Create training op
###Code
# create training op
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(model.loss, global_step=model.global_step)
###Output
_____no_output_____
###Markdown
`tf.Session()` and train
###Code
train_dir = './train/seq_classification.birnn/exp1'
if not tf.gfile.Exists(train_dir):
print("mkdir: {}".format(train_dir))
tf.gfile.MakeDirs(train_dir)
else:
print("already exist!")
saver = tf.train.Saver(tf.global_variables(), max_to_keep=1000)
tf.logging.info('Start Session.')
sess = tf.Session(config=sess_config)
sess.run(tf.global_variables_initializer())
train_iterator = train_dataset.make_one_shot_iterator()
train_handle = sess.run(train_iterator.string_handle())
tf.logging.info('Start train.')
# save loss values for plot
loss_history = []
pre_epochs = 0
while True:
try:
start_time = time.time()
_, global_step, loss = sess.run([train_op,
model.global_step,
model.loss],
feed_dict={handle: train_handle})
epochs = global_step * batch_size / float(len(x_train_words))
duration = time.time() - start_time
print_steps = 1
if global_step % print_steps == 0:
clear_output(wait=True)
examples_per_sec = batch_size / float(duration)
print("Epochs: {:.3f} global_step: {} loss: {:.3f} ({:.2f} examples/sec; {:.3f} sec/batch)".format(
epochs, global_step, loss, examples_per_sec, duration))
loss_history.append([epochs, loss])
# save model checkpoint periodically
save_epochs = 10
if int(epochs) % save_epochs == 0 and pre_epochs != int(epochs):
tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step, int(epochs)))
saver.save(sess, train_dir + 'model.ckpt', global_step=global_step)
pre_epochs = int(epochs)
except tf.errors.OutOfRangeError:
print("End of dataset") # ==> "End of dataset"
tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step, int(epochs)))
saver.save(sess, train_dir + 'model.ckpt', global_step=global_step)
break
tf.logging.info('complete training...')
###Output
Epochs: 50.000 global_step: 150 loss: 0.075 (727.93 examples/sec; 0.005 sec/batch)
INFO:tensorflow:Saving model with global step 150 (= 50 epochs) to disk.
End of dataset
INFO:tensorflow:Saving model with global step 150 (= 50 epochs) to disk.
INFO:tensorflow:complete training...
###Markdown
Plot the loss
###Code
loss_history = np.array(loss_history)
plt.plot(loss_history[:,0], loss_history[:,1], label='train')
###Output
_____no_output_____
###Markdown
Train accuracy and prediction
###Code
train_dataset_eval = tf.data.Dataset.from_tensor_slices((x_train_pad, x_train_seq_length, y_train))
train_dataset_eval = train_dataset_eval.batch(batch_size = len(x_train_pad))
train_iterator_eval = train_dataset_eval.make_initializable_iterator()
train_handle_eval = sess.run(train_iterator_eval.string_handle())
sess.run(train_iterator_eval.initializer)
accuracy, acc_op = tf.metrics.accuracy(labels=labels, predictions=model.predictions, name='accuracy')
sess.run(tf.local_variables_initializer())
sess.run(acc_op, feed_dict={handle: train_handle_eval})
print("training accuracy:", sess.run(accuracy))
sess.run(train_iterator_eval.initializer)
x_test_pad, y_pred = sess.run([model.seq_pad, model.predictions],
feed_dict={handle: train_handle_eval})
for x, y in zip(x_test_pad, y_pred):
if y == 0:
print("{} : good".format(tokens_to_string(x)))
else:
print("{} : bad".format(tokens_to_string(x)))
###Output
good : good
bad : bad
amazing : good
so good : good
bull shit : bad
awesome : good
how dare : bad
very much : good
nice : good
god damn it : bad
very very very happy : good
what the fuck : bad
|
intro_to_neural_nets.ipynb
|
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/Denny143/Auto-report/blob/master/intro_to_neural_nets.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Building a Neural Network. The NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class. Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment: `hidden_units=[3,10]`. The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively. By default, all hidden layers will use ReLU activation and will be fully connected.
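###Markdown
As a minimal sketch (not part of the exercise itself), the `hidden_units` example above translates directly into the constructor call below; the single feature column here is illustrative.
###Code
sketch_regressor = tf.estimator.DNNRegressor(
    feature_columns=[tf.feature_column.numeric_column("median_income")],
    hidden_units=[3, 10])   # two hidden layers: 3 nodes, then 10 nodes
###Output
_____no_output_____
###Markdown
The helper functions below build the feature columns, the input pipeline, and the training loop used for the actual exercise.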
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 212.96
period 01 : 178.46
period 02 : 172.83
period 03 : 187.06
period 04 : 164.98
period 05 : 224.55
period 06 : 142.60
period 07 : 143.46
period 08 : 146.05
period 09 : 135.72
Model training finished.
Final RMSE (on training data): 135.72
Final RMSE (on validation data): 135.77
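###Markdown
The task text above suggests checking how repeatable a given configuration is from run to run. Below is a minimal sketch of that check (an editorial illustration, not part of the original exercise), assuming `train_nn_regression_model` and the training/validation DataFrames defined earlier are still in scope.
###Code
# Illustrative sketch: re-run one configuration a few times to see the
# run-to-run variation caused by the random initialization of the weights.
for trial in range(3):
    print("Trial %d" % trial)
    _ = train_nn_regression_model(
        learning_rate=0.01,
        steps=500,
        batch_size=10,
        hidden_units=[10, 2],
        training_examples=training_examples,
        training_targets=training_targets,
        validation_examples=validation_examples,
        validation_targets=validation_targets)
###Output
_____no_output_____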
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 169.96
period 01 : 159.07
period 02 : 152.36
period 03 : 143.82
period 04 : 135.26
period 05 : 126.41
period 06 : 119.06
period 07 : 119.14
period 08 : 109.82
period 09 : 108.30
Model training finished.
Final RMSE (on training data): 108.30
Final RMSE (on validation data): 109.29
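###Markdown
As the solution note above says, a more rigorous way to pick these settings is a parameter search. The cell below is a minimal manual grid-search sketch over a few hand-picked values (an editorial illustration, not part of the original notebook), reusing `train_nn_regression_model` and the DataFrames defined earlier; the candidate values are arbitrary examples.
###Code
# Illustrative sketch: a small manual grid search over learning rate and
# hidden-layer sizes. Each combination is trained with
# train_nn_regression_model, which prints its final training/validation RMSE.
for learning_rate in [0.001, 0.003]:
    for hidden_units in [[10, 10], [20, 10]]:
        print("learning_rate=%s, hidden_units=%s" % (learning_rate, hidden_units))
        _ = train_nn_regression_model(
            learning_rate=learning_rate,
            steps=2000,
            batch_size=100,
            hidden_units=hidden_units,
            training_examples=training_examples,
            training_targets=training_targets,
            validation_examples=validation_examples,
            validation_targets=validation_targets)
###Output
_____no_output_____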
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
Final RMSE (on test data): 107.00
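###Markdown
To compare the test RMSE above directly with validation performance, the validation RMSE of the final model can be recomputed outside the training loop. A short sketch using only functions and variables already defined in this notebook (an editorial addition, not part of the original solution):
###Code
# Illustrative sketch: recompute the validation RMSE of the trained model so it
# can be placed side by side with the test RMSE printed above.
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                  validation_targets["median_house_value"],
                                                  num_epochs=1,
                                                  shuffle=False)
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
validation_rmse = math.sqrt(
    metrics.mean_squared_error(validation_predictions, validation_targets))
print("Validation RMSE for comparison: %0.2f" % validation_rmse)
###Output
_____no_output_____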
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
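As a concrete illustration (an editorial sketch, not part of the original notebook), the cell below builds a `DNNRegressor` with `hidden_units=[3, 10]` directly from the `training_examples` DataFrame prepared in the Setup section; it only constructs the estimator and does not train it.
###Code
# Illustrative sketch: hidden_units=[3, 10] means two fully connected ReLU
# layers, the first with 3 nodes and the second with 10 nodes.
example_feature_columns = [tf.feature_column.numeric_column(c)
                           for c in training_examples.columns]
example_nn = tf.estimator.DNNRegressor(
    feature_columns=example_feature_columns,
    hidden_units=[3, 10])
###Output
_____no_output_____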
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
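###Markdown
Because the task above recommends taking notes on each trial, one minimal way to keep those notes inside the notebook is to log each configuration in a Python list (an editorial sketch, not part of the original exercise; the final RMSE values are printed by `train_nn_regression_model` rather than returned, so they can be copied into the log by hand).
###Code
# Illustrative sketch: record each hyperparameter trial so the search stays organized.
trial_log = []
settings = {"learning_rate": 0.01, "steps": 500, "batch_size": 10,
            "hidden_units": [10, 2]}
dnn_regressor = train_nn_regression_model(
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets,
    **settings)
trial_log.append(settings)
print(trial_log)
###Output
_____no_output_____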
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/DillipKS/MLCC_assignments/blob/master/intro_to_neural_nets.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.02,
steps=3600,
batch_size=20,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 227.97
period 01 : 171.38
period 02 : 123.85
period 03 : 109.22
period 04 : 103.48
period 05 : 106.44
period 06 : 118.28
period 07 : 128.88
period 08 : 139.22
period 09 : 106.04
Model training finished.
Final RMSE (on training data): 106.04
Final RMSE (on validation data): 106.43
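###Markdown
One quick way to confirm the structure implied by `hidden_units` is to inspect the shapes of the learned weight matrices on the trained estimator (an editorial sketch, not part of the original notebook).
###Code
# Illustrative sketch: list the weight matrices of the trained DNNRegressor.
# With hidden_units=[10, 2] there is one kernel per hidden layer plus the
# final logits layer.
for name in dnn_regressor.get_variable_names():
    if name.endswith('/kernel'):
        print(name, dnn_regressor.get_variable_value(name).shape)
###Output
_____no_output_____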
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://storage.googleapis.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
test_examples = preprocess_features(california_housing_test_data)
display.display(test_examples.describe())
test_targets = preprocess_targets(california_housing_test_data)
display.display(test_targets.describe())
predict_test_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_test_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
test_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("RMSE (on test data): %0.2f" % test_root_mean_squared_error)
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/rogueai/tensorflow-crash-course/blob/master/intro_to_neural_nets.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
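Because training is nondeterministic, one hedged way to gauge repeatability (a sketch only, not part of the exercise; it assumes the functions and DataFrames defined in the cells above, and the settings and run count are arbitrary) is to retrain with identical settings a few times and compare the final validation RMSE of each run:
###Code
# Sketch: retrain with the same settings a few times and compare validation RMSE.
# Assumes train_nn_regression_model, my_input_fn, and the example/target
# DataFrames from the cells above; the settings and run count are arbitrary.
import math
import numpy as np
from sklearn import metrics
final_validation_rmses = []
for run in range(3):
  regressor = train_nn_regression_model(
      learning_rate=0.01,
      steps=500,
      batch_size=10,
      hidden_units=[10, 2],
      training_examples=training_examples,
      training_targets=training_targets,
      validation_examples=validation_examples,
      validation_targets=validation_targets)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                    validation_targets["median_house_value"],
                                                    num_epochs=1,
                                                    shuffle=False)
  predictions = np.array([item['predictions'][0]
                          for item in regressor.predict(input_fn=predict_validation_input_fn)])
  final_validation_rmses.append(
      math.sqrt(metrics.mean_squared_error(predictions, validation_targets)))
print("Validation RMSE across runs:", ["%.2f" % r for r in final_validation_rmses])
###Output
_____no_output_____
###Markdown
The block below runs a single training pass with the starter settings; adjust them from there.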
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
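For a slightly more rigorous search, the hedged sketch below (not part of the original solution) loops over a few candidate settings and ranks them by validation RMSE; the helper name `validation_rmse_of` and the candidate list are assumptions made for this sketch, and it relies on the functions and DataFrames defined earlier in the notebook.
###Code
# Sketch of a simple parameter search: train on a handful of candidate settings
# and rank them by validation RMSE. Assumes train_nn_regression_model,
# my_input_fn, and the example/target DataFrames from the cells above.
import math
import numpy as np
from sklearn import metrics
def validation_rmse_of(regressor):
  # Recompute final RMSE on the validation set for an already-trained regressor.
  input_fn = lambda: my_input_fn(validation_examples,
                                 validation_targets["median_house_value"],
                                 num_epochs=1,
                                 shuffle=False)
  predictions = np.array([item['predictions'][0]
                          for item in regressor.predict(input_fn=input_fn)])
  return math.sqrt(metrics.mean_squared_error(predictions, validation_targets))
candidate_settings = [
    (0.001, [10, 10]),
    (0.003, [10, 10]),
    (0.001, [20, 10]),
]
results = []
for learning_rate, hidden_units in candidate_settings:
  regressor = train_nn_regression_model(
      learning_rate=learning_rate,
      steps=2000,
      batch_size=100,
      hidden_units=hidden_units,
      training_examples=training_examples,
      training_targets=training_targets,
      validation_examples=validation_examples,
      validation_targets=validation_targets)
  results.append((validation_rmse_of(regressor), learning_rate, hidden_units))
for rmse, lr, units in sorted(results):
  print("learning_rate=%s, hidden_units=%s -> validation RMSE %.2f" % (lr, units, rmse))
###Output
_____no_output_____
###Markdown
The single configuration below is the one this solution settled on.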
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://storage.googleapis.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
print(california_housing_dataframe.head())
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print "Training examples summary:"
display.display(training_examples.describe())
print "Validation examples summary:"
display.display(validation_examples.describe())
print "Training targets summary:"
display.display(training_targets.describe())
print "Validation targets summary:"
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print "Training model..."
print "RMSE (on training data):"
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print " period %02d : %0.2f" % (period, training_root_mean_squared_error)
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print "Model training finished."
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print "Final RMSE (on training data): %0.2f" % training_root_mean_squared_error
print "Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 164.92
period 01 : 145.11
period 02 : 203.47
period 03 : 134.53
period 04 : 138.92
period 05 : 166.97
period 06 : 124.47
period 07 : 154.76
period 08 : 131.48
period 09 : 132.01
Model training finished.
Final RMSE (on training data): 132.01
Final RMSE (on validation data): 133.15
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 164.10
period 01 : 161.36
period 02 : 160.70
period 03 : 153.90
period 04 : 149.84
period 05 : 151.69
period 06 : 135.98
period 07 : 129.83
period 08 : 118.84
period 09 : 112.54
Model training finished.
Final RMSE (on training data): 112.54
Final RMSE (on validation data): 114.56
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://storage.googleapis.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print "Final RMSE (on test data): %0.2f" % root_mean_squared_error
###Output
Final RMSE (on test data): 131.26
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective (training is nondeterministic, so results may fluctuate a bit each time you run the solution). This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
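###Markdown
Before tuning anything, it can help to see what `my_input_fn` actually returns. The next cell is a small illustrative sketch (not part of the original exercise) that builds the input function on its own and evaluates a single batch inside a TF 1.x session; the batch size of 5 and the choice to disable shuffling are arbitrary assumptions made only for inspection.
###Code
# Hedged sketch: peek at one batch produced by my_input_fn.
# Assumes training_examples / training_targets from the setup cell above.
inspect_input_fn = lambda: my_input_fn(training_examples,
                                       training_targets["median_house_value"],
                                       batch_size=5,
                                       shuffle=False,
                                       num_epochs=1)
feature_tensors, label_tensor = inspect_input_fn()
with tf.Session() as sess:
    feature_batch, label_batch = sess.run([feature_tensors, label_tensor])
    print("Feature keys: %s" % sorted(feature_batch.keys()))
    print("First 5 targets: %s" % label_batch)
###Output
_____no_output_____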
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective (training is nondeterministic, so results may fluctuate a bit each time you run the solution). This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
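###Markdown
The note above suggests a parameter search as a more rigorous alternative to hand tuning. The cell below is a minimal, hypothetical sketch of such a search: it simply loops over a couple of hand-picked settings and reuses `train_nn_regression_model`. The candidate values are illustrative assumptions, not tuned recommendations.
###Code
# Hedged sketch: brute-force loop over a few arbitrary hyperparameter settings.
candidate_settings = [
    {"learning_rate": 0.001, "steps": 2000, "batch_size": 100, "hidden_units": [10, 10]},
    {"learning_rate": 0.003, "steps": 2000, "batch_size": 50,  "hidden_units": [20, 10]},
]
for settings in candidate_settings:
    print("Trying settings: %s" % settings)
    _ = train_nn_regression_model(
        learning_rate=settings["learning_rate"],
        steps=settings["steps"],
        batch_size=settings["batch_size"],
        hidden_units=settings["hidden_units"],
        training_examples=training_examples,
        training_targets=training_targets,
        validation_examples=validation_examples,
        validation_targets=validation_targets)
###Output
_____no_output_____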
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
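###Markdown
As an optional follow-up (a sketch, not part of the original solution), the cell below recomputes the validation RMSE for the trained `dnn_regressor` so it can be compared directly with the test RMSE printed above, which is exactly the "does validation performance hold up" question Task 2 asks.
###Code
# Hedged sketch: recompute validation RMSE and compare it with the test RMSE.
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                  validation_targets["median_house_value"],
                                                  num_epochs=1,
                                                  shuffle=False)
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
validation_rmse_check = math.sqrt(
    metrics.mean_squared_error(validation_predictions, validation_targets))
print("Validation RMSE: %0.2f vs. test RMSE: %0.2f"
      % (validation_rmse_check, root_mean_squared_error))
###Output
_____no_output_____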
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
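###Markdown
As a quick, self-contained sketch of the `hidden_units=[3,10]` example above (illustrative only, and not part of the original exercise), the next cell constructs such a regressor without training it; the single `median_income` feature column is an arbitrary assumption made just to keep the snippet runnable on its own.
###Code
# Hedged sketch: a DNNRegressor with a 3-node and a 10-node hidden layer.
# The "median_income" feature column is only here to make the example
# self-contained; the exercise below builds its columns from the training data.
example_regressor = tf.estimator.DNNRegressor(
    feature_columns=[tf.feature_column.numeric_column("median_income")],
    hidden_units=[3, 10])
print(example_regressor)
###Output
_____no_output_____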
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range(0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://dl.google.com/mlcc/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print "Training examples summary:"
display.display(training_examples.describe())
print "Validation examples summary:"
display.display(validation_examples.describe())
print "Training targets summary:"
display.display(training_targets.describe())
print "Validation targets summary:"
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print "Training model..."
print "RMSE (on training data):"
training_rmse = []
validation_rmse = []
for period in range(0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print " period %02d : %0.2f" % (period, training_root_mean_squared_error)
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print "Model training finished."
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print "Final RMSE (on training data): %0.2f" % training_root_mean_squared_error
print "Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 151.70
period 01 : 147.40
period 02 : 132.29
period 03 : 123.17
period 04 : 120.59
period 05 : 113.08
period 06 : 113.06
period 07 : 111.55
period 08 : 107.37
period 09 : 107.69
Model training finished.
Final RMSE (on training data): 107.69
Final RMSE (on validation data): 108.10
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://storage.googleapis.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print "Final RMSE (on test data): %0.2f" % root_mean_squared_error
###Output
Final RMSE (on test data): 105.80
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print "Final RMSE (on test data): %0.2f" % root_mean_squared_error
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range(0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
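###Markdown
The repeatability point made above can be checked directly. The next cell is a small sketch (not part of the original exercise) that reruns the same starter configuration a few times; because the weights are randomly initialized and the batches are shuffled, the final RMSE should vary somewhat from run to run.
###Code
# Hedged sketch: rerun the starter settings three times to observe run-to-run variation.
for run in range(3):
    print("Run %d of 3" % (run + 1))
    _ = train_nn_regression_model(
        learning_rate=0.01,
        steps=500,
        batch_size=10,
        hidden_units=[10, 2],
        training_examples=training_examples,
        training_targets=training_targets,
        validation_examples=validation_examples,
        validation_targets=validation_targets)
###Output
_____no_output_____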
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective (training is nondeterministic, so results may fluctuate a bit each time you run the solution). This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
_____no_output_____
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range(0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
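###Markdown
One way to act on the overfitting advice above is to compare the per-period losses directly. A minimal sketch, assuming you have collected the training and validation RMSE values as two equal-length Python lists (as `train_nn_regression_model` does internally before plotting them; it returns only the regressor, so the numbers below are hypothetical illustration values):
###Code
def rmse_gaps(training_rmse, validation_rmse):
    """Return per-period (validation - training) RMSE gaps and whether the gap keeps growing."""
    gaps = [v - t for t, v in zip(training_rmse, validation_rmse)]
    growing = all(later >= earlier for earlier, later in zip(gaps, gaps[1:]))
    return gaps, growing

# Hypothetical values for illustration only:
gaps, growing = rmse_gaps([150, 130, 120, 115], [155, 140, 135, 134])
print("gaps:", gaps, "| gap growing every period:", growing)
###Output
_____no_output_____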
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective (training is nondeterministic, so results may fluctuate a bit each time you run the solution). This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
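###Markdown
The note above mentions a parameter search as the more rigorous option. A minimal sketch of such a sweep, reusing the functions and DataFrames defined earlier in this notebook; the candidate values are arbitrary assumptions, and each trial retrains from scratch, so this is slow:
###Code
import itertools

candidate_learning_rates = [0.001, 0.003]
candidate_hidden_units = [[10, 10], [20, 10]]

search_results = []
for lr, hu in itertools.product(candidate_learning_rates, candidate_hidden_units):
    model = train_nn_regression_model(
        learning_rate=lr,
        steps=2000,
        batch_size=100,
        hidden_units=hu,
        training_examples=training_examples,
        training_targets=training_targets,
        validation_examples=validation_examples,
        validation_targets=validation_targets)
    # Re-evaluate the returned model on the validation set.
    predict_validation_input_fn = lambda: my_input_fn(
        validation_examples, validation_targets["median_house_value"],
        num_epochs=1, shuffle=False)
    predictions = np.array([item['predictions'][0]
                            for item in model.predict(input_fn=predict_validation_input_fn)])
    rmse = math.sqrt(metrics.mean_squared_error(predictions, validation_targets))
    search_results.append((rmse, lr, hu))

print("Best (validation RMSE, learning rate, hidden units):", sorted(search_results)[0])
###Output
_____no_output_____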
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
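###Markdown
To make the "compare to validation performance" step explicit, you can recompute the validation RMSE for the final model next to the test RMSE. A sketch, assuming the cells above have been run; the validation input function is rebuilt here because the one used during training is local to `train_nn_regression_model`:
###Code
predict_validation_input_fn = lambda: my_input_fn(
    validation_examples, validation_targets["median_house_value"],
    num_epochs=1, shuffle=False)
validation_predictions = np.array(
    [item['predictions'][0]
     for item in dnn_regressor.predict(input_fn=predict_validation_input_fn)])
validation_rmse = math.sqrt(
    metrics.mean_squared_error(validation_predictions, validation_targets))

print("Validation RMSE: %0.2f  |  Test RMSE: %0.2f"
      % (validation_rmse, root_mean_squared_error))
###Output
_____no_output_____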
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/nikhilbhatewara/GoogleMachineLearningCrashCourse/blob/master/intro_to_neural_nets.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Intro to Neural Networks **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
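###Markdown
As a concrete illustration of the `hidden_units` syntax described above, here is a toy estimator that is never trained and uses an arbitrary single feature column (an assumption made purely for illustration):
###Code
# Two hidden layers: 3 nodes, then 10 nodes. By default the layers are
# fully connected and use ReLU activation.
example_feature_columns = [tf.feature_column.numeric_column("median_income")]
example_dnn = tf.estimator.DNNRegressor(
    feature_columns=example_feature_columns,
    hidden_units=[3, 10])
print(type(example_dnn).__name__)
###Output
_____no_output_____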
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 236.55
period 01 : 234.38
period 02 : 232.22
period 03 : 230.08
period 04 : 227.93
period 05 : 225.82
period 06 : 223.68
period 07 : 221.56
period 08 : 219.43
period 09 : 217.32
Model training finished.
Final RMSE (on training data): 217.32
Final RMSE (on validation data): 214.79
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective. This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=2000,
batch_size=100,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 161.21
period 01 : 152.18
period 02 : 145.89
period 03 : 139.68
period 04 : 135.48
period 05 : 131.00
period 06 : 126.09
period 07 : 122.91
period 08 : 130.11
period 09 : 119.87
Model training finished.
Final RMSE (on training data): 119.87
Final RMSE (on validation data): 117.60
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
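###Markdown
As a cross-check (a sketch, assuming the solution cell above has been run so `dnn_regressor` and `predict_testing_input_fn` exist): `tf.estimator` models also provide an `evaluate` method whose `average_loss` entry is the mean per-example squared error over the input, so its square root should closely match the RMSE computed manually above.
###Code
evaluation_metrics = dnn_regressor.evaluate(input_fn=predict_testing_input_fn)
print("Test RMSE via evaluate(): %0.2f"
      % math.sqrt(evaluation_metrics["average_loss"]))
###Output
_____no_output_____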
###Markdown
**Intro to Neural Networks**
--- [](https://mybinder.org/v2/gh/kyle-w-brown/tensorflow-1.x.git/HEAD) **Learning Objectives:** * Define a neural network (NN) and its hidden layers using the TensorFlow `DNNRegressor` class * Train a neural network to learn nonlinearities in a dataset and achieve better performance than a linear regression model In the previous exercises, we used synthetic features to help our model incorporate nonlinearities.One important set of nonlinearities was around latitude and longitude, but there may be others.We'll also switch back, for now, to a standard regression task, rather than the logistic regression task from the previous exercise. That is, we'll be predicting `median_house_value` directly. SetupFirst, let's load and prepare the data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Building a Neural NetworkThe NN is defined by the [DNNRegressor](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNRegressor) class.Use **`hidden_units`** to define the structure of the NN. The `hidden_units` argument provides a list of ints, where each int corresponds to a hidden layer and indicates the number of nodes in it. For example, consider the following assignment:`hidden_units=[3,10]`The preceding assignment specifies a neural net with two hidden layers:* The first hidden layer contains 3 nodes.* The second hidden layer contains 10 nodes.If we wanted to add more layers, we'd add more ints to the list. For example, `hidden_units=[10,20,30,40]` would create four layers with ten, twenty, thirty, and forty units, respectively.By default, all hidden layers will use ReLu activation and will be fully connected.
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a neural net regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_nn_regression_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `DNNRegressor` object trained on the training data.
"""
periods = 10
steps_per_period = steps / periods
# Create a DNNRegressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
dnn_regressor = tf.estimator.DNNRegressor(
feature_columns=construct_feature_columns(training_examples),
hidden_units=hidden_units,
optimizer=my_optimizer,
)
# Create input functions.
training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(training_examples,
training_targets["median_house_value"],
num_epochs=1,
shuffle=False)
predict_validation_input_fn = lambda: my_input_fn(validation_examples,
validation_targets["median_house_value"],
num_epochs=1,
shuffle=False)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
training_rmse = []
validation_rmse = []
for period in range (0, periods):
# Train the model, starting from the prior state.
dnn_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
training_predictions = dnn_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])
validation_predictions = dnn_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
# Compute training and validation loss.
training_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(training_predictions, training_targets))
validation_root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(validation_predictions, validation_targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, training_root_mean_squared_error))
# Add the loss metrics from this period to our list.
training_rmse.append(training_root_mean_squared_error)
validation_rmse.append(validation_root_mean_squared_error)
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.ylabel("RMSE")
plt.xlabel("Periods")
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(training_rmse, label="training")
plt.plot(validation_rmse, label="validation")
plt.legend()
print("Final RMSE (on training data): %0.2f" % training_root_mean_squared_error)
print("Final RMSE (on validation data): %0.2f" % validation_root_mean_squared_error)
return dnn_regressor
###Output
_____no_output_____
###Markdown
Task 1: Train a NN Model**Adjust hyperparameters, aiming to drop RMSE below 110.**Run the following block to train a NN model. Recall that in the linear regression exercise with many features, an RMSE of 110 or so was pretty good. We'll aim to beat that.Your task here is to modify various learning settings to improve accuracy on validation data.Overfitting is a real potential hazard for NNs. You can look at the gap between loss on training data and loss on validation data to help judge if your model is starting to overfit. If the gap starts to grow, that is usually a sure sign of overfitting.Because of the number of different possible settings, it's strongly recommended that you take notes on each trial to help guide your development process.Also, when you get a good setting, try running it multiple times and see how repeatable your result is. NN weights are typically initialized to small random values, so you should see differences from run to run.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.01,
steps=500,
batch_size=10,
hidden_units=[10, 2],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
WARNING (repeated before each period): Entity <bound method _DNNModel.call of <tensorflow_estimator.python.estimator.canned.dnn._DNNModel object>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. Cause: module 'gast' has no attribute 'Num'
period 00 : 186.33
period 01 : 282.58
period 02 : 167.18
period 03 : 230.18
period 04 : 191.16
period 05 : 156.46
period 06 : 148.02
period 07 : 182.01
period 08 : 182.16
period 09 : 144.25
Model training finished.
Final RMSE (on training data): 144.25
Final RMSE (on validation data): 138.43
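###Markdown
A quick qualitative check on the model just trained (a sketch, assuming the cells above have been run): scatter the predicted validation values against the actual ones; a well-fit model would hug the diagonal.
###Code
predict_validation_input_fn = lambda: my_input_fn(
    validation_examples, validation_targets["median_house_value"],
    num_epochs=1, shuffle=False)
validation_predictions = np.array(
    [item['predictions'][0]
     for item in dnn_regressor.predict(input_fn=predict_validation_input_fn)])

plt.figure(figsize=(6, 6))
plt.scatter(validation_targets["median_house_value"], validation_predictions, s=1)
plt.xlabel("actual median house value ($1000s)")
plt.ylabel("predicted median house value ($1000s)")
plt.title("Validation set: predicted vs. actual")
###Output
_____no_output_____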
###Markdown
SolutionClick below to see a possible solution **NOTE:** This selection of parameters is somewhat arbitrary. Here we've tried combinations that are increasingly complex, combined with training for longer, until the error falls below our objective (training is nondeterministic, so results may fluctuate a bit each time you run the solution). This may not be the best combination; others may attain an even lower RMSE. If your aim is to find the model that can attain the best error, then you'll want to use a more rigorous process, like a parameter search.
###Code
dnn_regressor = train_nn_regression_model(
learning_rate=0.001,
steps=500,
batch_size=10,
hidden_units=[10, 10],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
WARNING (repeated before each period): Entity <bound method _DNNModel.call of <tensorflow_estimator.python.estimator.canned.dnn._DNNModel object>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. Cause: module 'gast' has no attribute 'Num'
period 00 : 185.32
period 01 : 174.54
period 02 : 173.21
period 03 : 168.86
period 04 : 172.31
period 05 : 176.44
period 06 : 175.89
period 07 : 173.68
period 08 : 167.90
period 09 : 169.85
Model training finished.
Final RMSE (on training data): 169.85
Final RMSE (on validation data): 163.16
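###Markdown
Since training is nondeterministic, a simple repeatability check is to retrain with identical settings a few times and compare the resulting validation RMSE values. A sketch (slow; the settings simply mirror the cell above and are assumptions rather than recommended values):
###Code
repeat_rmses = []
for trial in range(3):
    model = train_nn_regression_model(
        learning_rate=0.001,
        steps=500,
        batch_size=10,
        hidden_units=[10, 10],
        training_examples=training_examples,
        training_targets=training_targets,
        validation_examples=validation_examples,
        validation_targets=validation_targets)
    predict_validation_input_fn = lambda: my_input_fn(
        validation_examples, validation_targets["median_house_value"],
        num_epochs=1, shuffle=False)
    predictions = np.array([item['predictions'][0]
                            for item in model.predict(input_fn=predict_validation_input_fn)])
    repeat_rmses.append(math.sqrt(metrics.mean_squared_error(predictions, validation_targets)))

print("Validation RMSE across %d trials:" % len(repeat_rmses), repeat_rmses)
###Output
_____no_output_____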
###Markdown
Task 2: Evaluate on Test Data**Confirm that your validation performance results hold up on test data.**Once you have a model you're happy with, evaluate it on test data to compare that to validation performance.Reminder, the test data set is located [here](https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv).
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
SolutionClick below to see a possible solution. Similar to what the code at the top does, we just need to load the appropriate data file, preprocess it and call predict and mean_squared_error.Note that we don't have to randomize the test data, since we will use all records.
###Code
california_housing_test_data = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_test.csv", sep=",")
test_examples = preprocess_features(california_housing_test_data)
test_targets = preprocess_targets(california_housing_test_data)
predict_testing_input_fn = lambda: my_input_fn(test_examples,
test_targets["median_house_value"],
num_epochs=1,
shuffle=False)
test_predictions = dnn_regressor.predict(input_fn=predict_testing_input_fn)
test_predictions = np.array([item['predictions'][0] for item in test_predictions])
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(test_predictions, test_targets))
print("Final RMSE (on test data): %0.2f" % root_mean_squared_error)
###Output
_____no_output_____
|
I_ForcingFiles/Rivers/FindKeyRiverScaling.ipynb
|
###Markdown
Table of Contents This is yearly; I should be doing it monthly
###Code
import datetime
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
%matplotlib inline
squamish1 = pd.read_csv('Theodosia_Scotty_flow', header=None, sep='\s+', index_col=False,
names=['year', 'month', 'day', 'flow'])
squamish2 = pd.read_csv('Theodosia_Bypass_flow', header=None, sep='\s+', index_col=False,
names=['year', 'month', 'day', 'flow'])
squamish3 = pd.read_csv('Theodosia_Diversion_flow', header=None, sep='\s+', index_col=False,
names=['year', 'month', 'day', 'flow'])
watershed = 'Toba'
for squamish in [squamish1, squamish2, squamish3]:
length = squamish.year.shape[0]
dates = []
for i in range(length):
dates.append(datetime.datetime(squamish.year[i], squamish.month[i], squamish.day[i]))
squamish['dates'] = dates
squamish1 = squamish1.set_index('dates')
squamish2 = squamish2.set_index('dates')
squamish3 = squamish3.set_index('dates')
squamish = squamish1 + squamish3 - squamish2
squamish = pd.read_csv('Salmon_Sayward_flow', header=None, sep='\s+', index_col=False,
names=['year', 'month', 'day', 'flow'])
watershed = 'EVI_N'
length = squamish.year.shape[0]
dates = []
for i in range(length):
dates.append(datetime.datetime(squamish.year[i], squamish.month[i], squamish.day[i]))
squamish['dates'] = dates
squamish = squamish.set_index('dates')
squamish[:5]
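# NOTE: the plotting cell below uses goodyears and squamish_flux, which are only computed
# further down in this notebook; run those cells first when executing in order.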
fig, ax = plt.subplots(1, 1, figsize=(15, 4))
dateyears = []
squamish[squamish.year < 2010][squamish.year > 1974]['flow'].plot(ax=ax)
squamish[squamish.year < 2010][squamish.year > 1974]['flow'].resample(
'A').mean().plot(ax=ax)
for j, year in enumerate(goodyears):
dateyears.append(datetime.datetime(year, 12, 31))
ax.plot(dateyears, squamish_flux, 'r*-')
ax.set_ylim(0, 1000)
diffy = squamish.index[1:] - squamish.index[:-1]
diffy_data = pd.DataFrame({'date': squamish.index[1:],
'year' : squamish.year[1:],
'gap' : diffy[:]})
diffy_data = diffy_data.set_index('date')
fig, ax = plt.subplots(1, 1, figsize=(15, 4))
plt.plot(squamish.index[1:], diffy, '*-')
year = 1974
plt.xlim(datetime.datetime(year, 1, 1), datetime.datetime(year, 12, 31))
#plt.ylim(0, 1e15)
goodyears = []
# Keep only years with a complete daily record: the largest gap between consecutive
# observations is exactly one day and the annual mean flow is not NaN.
for year in range(1970, 2010):
    maxgap = diffy_data.gap[diffy_data.year == year].max()
    if maxgap == datetime.datetime(1980, 1, 2) - datetime.datetime(1980, 1, 1):
        if not np.isnan(squamish.flow[squamish.year == year].mean()):
            goodyears.append(year)
squamish_flux = np.zeros(len(goodyears))
for i, year in enumerate(goodyears):
squamish_flux[i] = squamish.flow[squamish.year == year].mean()
plt.plot(goodyears, squamish_flux, '*');
howe_flux = np.zeros(len(goodyears))
for i, year in enumerate(goodyears):
morrison = pd.read_excel('Copy of Flow_Mon_X_Year.xlsx', sheetname=str(year))
howe_flux[i] = morrison['Km^3'][morrison['Water Year'] == watershed]
plt.plot(goodyears, howe_flux, '*');
plt.plot(squamish_flux*365*86400/1e9, howe_flux, '+');
model = LinearRegression(fit_intercept=True)
model.fit(squamish_flux[:, np.newaxis]*365.25*86400/1e9, howe_flux)
xfit = np.linspace(0, 12, 10)
yfit = model.predict(xfit[:, np.newaxis])
plt.scatter(squamish_flux*365.25*86400/1e9, howe_flux)
plt.plot(xfit, yfit, 'r');
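# Added check (not in the original cell): report the fitted scaling directly.
# model is the sklearn LinearRegression fit above; coef_[0] is the slope and
# intercept_ is the offset in km^3/yr.
print('slope:', model.coef_[0], 'intercept (km^3/yr):', model.intercept_)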
X = squamish_flux*365.25*86400/1e9
y = howe_flux
# Fit and make the predictions by the model
model = sm.OLS(y, X).fit()
predictions = model.predict(X)
plt.plot(X, y, 'x')
plt.plot(X, predictions,'*')
# Print out the statistics
print ((np.sqrt(((predictions-y)**2).sum()))/len(goodyears))
model.params
X = squamish_flux*365.25*86400/1e9
X = sm.add_constant(X)
y = howe_flux
# Fit and make the predictions by the model
model = sm.OLS(y, X).fit()
predictions2 = model.predict(X)
plt.plot(X, y, 'x')
plt.plot(X, predictions2,'*')
# Print out the statistics
print ((np.sqrt(((predictions2-y)**2).sum()))/len(goodyears))
print(model.summary())
model.params
X = squamish_flux*365.25*86400/1e9
b = howe_flux.mean() - X.mean()
predictions3 = b + X
# Fit and make the predictions by the model
plt.plot(X, y, 'x')
plt.plot(X, predictions3,'*')
# Print out the statistics
print ((np.sqrt(((predictions3-y)**2).sum()))/len(goodyears))
print (b, '1')
plt.plot(goodyears, predictions,'+-', label='linear')
plt.plot(goodyears, predictions2,'x-', label="with const")
plt.plot(goodyears, howe_flux, 's-', label='Howe')
plt.plot(goodyears, predictions3,'*-', label="1x and const")
plt.legend()
plt.grid()
# Homathko_Mouth for Bute. Multiply by 1.99
# Clowhom_ClowhomLake for Jervis. 6.75 + 4.33 x (6.75*1e9/365.25*86400)
# Squamish_Brackendale for Howe. Multiply by 2.27
# SanJuan_PortRenfrew for JdF. 3.72 + 6.00 x
# Salmon_Sayward for EVI_N, 16.67 + 1.74 x
# Englishman for EVI_S, 6.03 + 10.4 x
# Theodosia for Toba, 5.75 + 7.20 x
# Snohomish for Skagit, 17.57 + 1.36 x
# Nisqually for Puget, 10.75 + 4.33 x
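# Hedged sketch (not part of the original analysis): the fits listed above can be kept in
# one lookup of (intercept [km^3/yr], slope) pairs and applied to a gauged mean flow in
# m^3/s to estimate the full watershed discharge in km^3/yr.
watershed_fits = {'Bute': (0.0, 1.99), 'Jervis': (6.75, 4.33), 'Howe': (0.0, 2.27),
                  'JdF': (3.72, 6.00), 'EVI_N': (16.67, 1.74), 'EVI_S': (6.03, 10.4),
                  'Toba': (5.75, 7.20), 'Skagit': (17.57, 1.36), 'Puget': (10.75, 4.33)}
def scale_watershed(mean_flow_m3s, name):
    intercept, slope = watershed_fits[name]
    return intercept + slope * mean_flow_m3s * 365.25 * 86400 / 1e9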
bute_flux = np.zeros((2010-1970+1, 12))
for i, year in enumerate(range(1970, 2010+1)):
morrison = pd.read_excel('Copy of Flow_Mon_X_Year.xlsx', sheetname=str(year))
bute_flux[i, :] = morrison[morrison['Water Year'] == watershed].iloc[:, 4:16]
fig, ax = plt.subplots(1, 1, figsize=(15, 4))
for i in range(12):
ax.plot(np.arange(1970, 2010+1)+i/12.+1/12., bute_flux[:, i], '*');
monthly = squamish['flow'].resample('1M').mean()
ax.plot(monthly.index.year + (monthly.index.month)/12., monthly*30.5*86400/1e9*2, 'x-')
ax.set_xlim(2000, 2010)
###Output
_____no_output_____
|
sklearn-end2end.ipynb
|
###Markdown
Targeting Direct Marketing with Amazon SageMaker and Scikit-Learn --- Background Direct marketing, whether through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to target only the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem. This notebook presents an example problem: predicting whether a customer will enroll for a term deposit at a bank after one or more phone calls. The steps include:* Preparing your Amazon SageMaker notebook* Downloading data from the internet into Amazon SageMaker* Investigating and transforming the data so that it can be fed to Amazon SageMaker algorithms* Estimating a model with a Scikit-Learn Random Forest* Evaluating the effectiveness of the model* Setting the model up to make ongoing predictions --- Preparation _This notebook was created and tested on an ml.m4.xlarge notebook instance._ Let's start by specifying:- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.- The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Note: if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role ARN string(s).
###Code
# cell 01
import sagemaker
bucket=sagemaker.Session().default_bucket()
prefix = 'sagemaker/sklearn-end-2end-immday'
# Define IAM role
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Now let's bring in the Python libraries that we'll use throughout the analysis
###Code
# cell 02
import numpy as np # For matrix operations and numerical processing
import pandas as pd # For munging tabular data
import matplotlib.pyplot as plt # For charts and visualizations
from IPython.display import Image # For displaying images in the notebook
from IPython.display import display # For displaying outputs in the notebook
from time import gmtime, strftime # For labeling SageMaker models, endpoints, etc.
import sys # For writing outputs to notebook
import math # For ceiling function
import json # For parsing hosting outputs
import os # For manipulating filepath names
import sagemaker
import zipfile # Amazon SageMaker's Python SDK provides many helper functions
# cell 03
pd.__version__
###Output
_____no_output_____
###Markdown
Make sure the pandas version is 1.2.4 or later. If it is not, restart the kernel before going further. --- Data Let's start by downloading the [direct marketing dataset](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data S3 bucket. \[Moro et al., 2014\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
###Code
# cell 04
!wget https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
with zipfile.ZipFile('bank-additional.zip', 'r') as zip_ref:
zip_ref.extractall('.')
###Output
_____no_output_____
###Markdown
Now let's read this into a Pandas data frame and take a look.
###Code
# cell 05
data = pd.read_csv('./bank-additional/bank-additional-full.csv')
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 20) # Keep the output on one page
data
###Output
_____no_output_____
###Markdown
We will store this raw file in S3 so that we can then process it with SageMaker Processing.
###Code
# cell 06
from sagemaker import Session
sess = Session()
input_source = sess.upload_data('./bank-additional/bank-additional-full.csv', bucket=bucket, key_prefix=f'{prefix}/input_data')
input_source
###Output
_____no_output_____
###Markdown
Feature Engineering with Amazon SageMaker ProcessingAmazon SageMaker Processing allows you to run steps for data pre- or post-processing, feature engineering, data validation, or model evaluation workloads on Amazon SageMaker. Processing jobs accept data from Amazon S3 as input and store data into Amazon S3 as output.Here, we'll import the dataset and transform it with SageMaker Processing, which can be used to process terabytes of data in a SageMaker-managed cluster separate from the instance running your notebook server. In a typical SageMaker workflow, notebooks are only used for prototyping and can be run on relatively inexpensive and less powerful instances, while processing, training and model hosting tasks are run on separate, more powerful SageMaker-managed instances. SageMaker Processing includes off-the-shelf support for Scikit-learn, as well as a Bring Your Own Container option, so it can be used with many different data transformation technologies and tasks. To use SageMaker Processing, simply supply a Python data preprocessing script as shown below. For this example, we're using a SageMaker prebuilt Scikit-learn container, which includes many common functions for processing data. There are few limitations on what kinds of code and operations you can run, and only a minimal contract: input and output data must be placed in specified directories. If this is done, SageMaker Processing automatically loads the input data from S3 and uploads transformed data back to S3 when the job is complete.
###Code
# cell 07
%%writefile preprocessing.py
import pandas as pd
import numpy as np
import argparse
import os
from sklearn.preprocessing import OrdinalEncoder
def _parse_args():
parser = argparse.ArgumentParser()
# Data, model, and output directories
# model_dir is always passed in from SageMaker. By default this is a S3 path under the default bucket.
parser.add_argument('--filepath', type=str, default='/opt/ml/processing/input/')
parser.add_argument('--filename', type=str, default='bank-additional-full.csv')
parser.add_argument('--outputpath', type=str, default='/opt/ml/processing/output/')
parser.add_argument('--categorical_features', type=str, default='y, job, marital, education, default, housing, loan, contact, month, day_of_week, poutcome')
return parser.parse_known_args()
if __name__=="__main__":
# Process arguments
args, _ = _parse_args()
# Load data
df = pd.read_csv(os.path.join(args.filepath, args.filename))
# Change the value . into _
df = df.replace(regex=r'\.', value='_')
df = df.replace(regex=r'\_$', value='')
# Add two new indicators
df["no_previous_contact"] = (df["pdays"] == 999).astype(int)
df["not_working"] = df["job"].isin(["student", "retired", "unemployed"]).astype(int)
df = df.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)
# Encode the categorical features
df = pd.get_dummies(df)
# Train, test, validation split
train_data, validation_data, test_data = np.split(df.sample(frac=1, random_state=42), [int(0.7 * len(df)), int(0.9 * len(df))]) # Randomly sort the data then split out first 70%, second 20%, and last 10%
# Local store
pd.concat([train_data['y_yes'], train_data.drop(['y_yes','y_no'], axis=1)], axis=1).to_csv(os.path.join(args.outputpath, 'train/train.csv'), index=False, header=False)
pd.concat([validation_data['y_yes'], validation_data.drop(['y_yes','y_no'], axis=1)], axis=1).to_csv(os.path.join(args.outputpath, 'validation/validation.csv'), index=False, header=False)
test_data['y_yes'].to_csv(os.path.join(args.outputpath, 'test/test_y.csv'), index=False, header=False)
test_data.drop(['y_yes','y_no'], axis=1).to_csv(os.path.join(args.outputpath, 'test/test_x.csv'), index=False, header=False)
print("## Processing complete. Exiting.")
###Output
_____no_output_____
###Markdown
Before starting the SageMaker Processing job, we instantiate a `SKLearnProcessor` object. This object allows you to specify the instance type to use in the job, as well as how many instances. Although this marketing dataset is quite small and a single instance is enough here, the same configuration makes it just as easy to spin up a multi-instance cluster for SageMaker Processing.
###Code
# cell 08
train_path = f"s3://{bucket}/{prefix}/train"
validation_path = f"s3://{bucket}/{prefix}/validation"
test_path = f"s3://{bucket}/{prefix}/test"
# cell 09
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker import get_execution_role
sklearn_processor = SKLearnProcessor(
framework_version="0.23-1",
role=get_execution_role(),
instance_type="ml.m5.large",
instance_count=1,
base_job_name='sm-immday-skprocessing'
)
sklearn_processor.run(
code='preprocessing.py',
inputs=[
ProcessingInput(
source=input_source,
destination="/opt/ml/processing/input",
s3_input_mode="File",
s3_data_distribution_type="ShardedByS3Key"
)
],
outputs=[
ProcessingOutput(
output_name="train_data",
source="/opt/ml/processing/output/train",
destination=train_path,
),
ProcessingOutput(output_name="validation_data", source="/opt/ml/processing/output/validation", destination=validation_path),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/output/test", destination=test_path),
]
)
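# Optional check (assumption: the processing job above has completed): inspect the job
# description, including the resolved S3 output locations.
sklearn_processor.latest_job.describe()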
###Output
_____no_output_____
###Markdown
--- End of Lab 1 --- Training Now we know that most of our features have skewed distributions, some are highly correlated with one another, and some appear to have non-linear relationships with our target variable. Also, for targeting future prospects, good predictive accuracy is preferred to being able to explain why that prospect was targeted. Taken together, these aspects make gradient boosted trees a good candidate algorithm. There are several intricacies to understanding the algorithm, but at a high level, gradient boosted trees work by combining predictions from many simple models, each of which tries to address the weaknesses of the previous models. By doing this, the collection of simple models can actually outperform large, complex models. Other Amazon SageMaker notebooks elaborate further on gradient boosted trees and how they differ from similar algorithms. In this notebook we show how to use Amazon SageMaker to develop, train, tune and deploy a Scikit-Learn based ML model (Random Forest). More info on Scikit-Learn can be found [here](https://scikit-learn.org/stable/index.html).
###Code
# cell 10
s3_input_train = sagemaker.inputs.TrainingInput(s3_data=train_path.format(bucket, prefix), content_type='csv')
s3_input_validation = sagemaker.inputs.TrainingInput(s3_data=validation_path.format(bucket, prefix), content_type='csv')
###Output
_____no_output_____
###Markdown
The script below contains both training and inference functionality and can run both on SageMaker Training hardware or locally (desktop, SageMaker notebook, on premises, etc.). Detailed guidance is available at https://sagemaker.readthedocs.io/en/stable/using_sklearn.html#preparing-the-scikit-learn-training-script
###Code
# cell 11
%%writefile sklearn-train.py
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from joblib import dump, load
import pandas as pd, numpy as np, os, argparse
# inference function - tells SageMaker how to load the model
def model_fn(model_dir):
clf = load(os.path.join(model_dir, "model.joblib"))
return clf
# Argument parser
def _parse_args():
parser = argparse.ArgumentParser()
# Hyperparameters
parser.add_argument("--n-estimators", type=int, default=10)
parser.add_argument("--min-samples-leaf", type=int, default=3)
# Data, model, and output directories
parser.add_argument("--model-dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
parser.add_argument("--test", type=str, default=os.environ.get("SM_CHANNEL_TEST"))
parser.add_argument("--train-file", type=str, default="train.csv")
parser.add_argument("--test-file", type=str, default="test.csv")
# Parse the arguments
return parser.parse_known_args()
# Main Training Loop
if __name__=="__main__":
# Process arguments
args, _ = _parse_args()
# Load the dataset
train_df = pd.read_csv(os.path.join(args.train, args.train_file))
test_df = pd.read_csv(os.path.join(args.test, args.test_file))
# Separate X and y
X_train, y_train = train_df.drop(train_df.columns[0], axis=1), train_df[train_df.columns[0]]
X_test, y_test = test_df.drop(test_df.columns[0], axis=1), test_df[test_df.columns[0]]
# Define the model and train it
model = RandomForestClassifier(
n_estimators=args.n_estimators, min_samples_leaf=args.min_samples_leaf, n_jobs=-1
)
model.fit(X_train, y_train)
# Evaluate the model performances
print(f'Model Accuracy: {accuracy_score(y_test, model.predict(X_test))}')
dump(model, os.path.join(args.model_dir, 'model.joblib'))
# cell 12
# We use the Estimator from the SageMaker Python SDK
from sagemaker import get_execution_role
from sagemaker.sklearn.estimator import SKLearn
FRAMEWORK_VERSION = "0.23-1"
# Define the Estimator from SageMaker (Script Mode)
sklearn_estimator = SKLearn(
entry_point="sklearn-train.py",
role=get_execution_role(),
instance_count=1,
instance_type="ml.c5.xlarge",
framework_version=FRAMEWORK_VERSION,
base_job_name="rf-scikit",
metric_definitions=[{"Name": "model_accuracy", "Regex": "Model Accuracy: ([0-9.]+).*$"}],
hyperparameters={
"n-estimators": 100,
"min-samples-leaf": 3,
"test-file": "validation.csv"
},
)
# Train the model (~5 minutes)
sklearn_estimator.fit({"train": s3_input_train, "test": s3_input_validation})
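# Optional (assumption: training has finished): the S3 location of the trained model
# artifact is exposed on the estimator.
print(sklearn_estimator.model_data)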
###Output
_____no_output_____
###Markdown
--- HostingNow that we've trained the algorithm on our data, let's deploy a model that's hosted behind a real-time endpoint.
###Code
# cell 13
sklearn_predictor = sklearn_estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
--- Evaluation There are many ways to compare the performance of a machine learning model, but let's start by simply comparing actual to predicted values. In this case, we're simply predicting whether the customer subscribed to a term deposit (`1`) or not (`0`), which produces a simple confusion matrix. First we'll need to determine how we pass data into and receive data from our endpoint. Our data is currently stored as NumPy arrays in the memory of our notebook instance. To send it in an HTTP POST request, we'll serialize it as a CSV string and then decode the resulting CSV. *Note: for inference with CSV format, the request data must NOT include the target variable.*
###Code
# cell 14
sklearn_predictor.serializer = sagemaker.serializers.CSVSerializer()
###Output
_____no_output_____
###Markdown
Now we'll download the held-out test features and labels from S3, load them into pandas DataFrames (notice that the target variable is kept in a separate file), and invoke the deployed scikit-learn endpoint to retrieve predictions as a NumPy array.
###Code
# cell 15
!aws s3 cp $test_path/test_x.csv /tmp/test_x.csv
!aws s3 cp $test_path/test_y.csv /tmp/test_y.csv
# cell 16
test_x = pd.read_csv('/tmp/test_x.csv', names=[f'{i}' for i in range(59)])
test_y = pd.read_csv('/tmp/test_y.csv', names=['y'])
predictions = sklearn_predictor.predict(test_x.values)
###Output
_____no_output_____
###Markdown
Now we'll check our confusion matrix to see how well we predicted versus actuals.
###Code
# cell 17
pd.crosstab(index=test_y['y'].values, columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
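# Optional summary (not in the original notebook): standard classification metrics,
# assuming `predictions` holds 0/1 class labels returned by the scikit-learn endpoint.
from sklearn.metrics import accuracy_score, precision_score, recall_score
pred_labels = np.round(predictions).astype(int)
print('accuracy :', accuracy_score(test_y['y'].values, pred_labels))
print('precision:', precision_score(test_y['y'].values, pred_labels))
print('recall   :', recall_score(test_y['y'].values, pred_labels))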
###Output
_____no_output_____
###Markdown
So, of the ~4000 potential customers, we predicted 136 would subscribe and 94 of them actually did. We also had 389 customers who did subscribe but whom we did not predict would. This is less than desirable, but the model can (and should) be tuned to improve this. Most importantly, note that with minimal effort, our model produced accuracies similar to those published [here](http://media.salford-systems.com/video/tutorial/2015/targeted_marketing.pdf). _Note that because there is some element of randomness in the algorithm's subsample, your results may differ slightly from the text written above._ Get Inferences for an Entire Dataset with Batch Transform To get inferences for an entire dataset, use batch transform. With batch transform, you create a batch transform job using a trained model and the dataset, which must be stored in Amazon S3. Amazon SageMaker saves the inferences in an S3 bucket that you specify when you create the batch transform job. Batch transform manages all of the compute resources required to get inferences. This includes launching instances and deleting them after the batch transform job has completed. Batch transform manages interactions between the data and the model with an object within the instance node called an agent. Use batch transform when you:- Want to get inferences for an entire dataset and index them to serve inferences in real time- Don't need a persistent endpoint that applications (for example, web or mobile apps) can call to get inferences- Don't need the subsecond latency that SageMaker hosted endpoints provide You can also use batch transform to preprocess your data before using it to train a new model or generate inferences. The following diagram shows the workflow of a batch transform job. To perform a batch transform, create a batch transform job using either the SageMaker console or the API. Provide the following:- The path to the S3 bucket where you've stored the data that you want to transform.- The compute resources that you want SageMaker to use for the transform job. Compute resources are machine learning (ML) compute instances that are managed by SageMaker.- The path to the S3 bucket where you want to store the output of the job.- The name of the SageMaker model that you want to use to create inferences. You must use a model that you have already created either with the [CreateModel](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html) operation or the console.
###Code
# cell 18
transformer_output_path = f"s3://{bucket}/{prefix}/transformer-output"
sklearn_transformer = sklearn_estimator.transformer(
instance_count=1,
instance_type='ml.m5.large',
output_path=transformer_output_path
)
sklearn_transformer.transform(
data=f'{test_path}/test_x.csv',
data_type='S3Prefix',
content_type='text/csv'
)
# cell 19
!aws s3 cp $transformer_output_path/test_x.csv.out /tmp/predictions.txt
# cell 20
import json
with open('/tmp/predictions.txt', 'r') as r:
a = r.read()[1:-1].split(', ')
predictions = np.asarray(a)
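# Note (assumption about the transform output format): the values parsed above are strings
# such as '0.0'; cast a copy if numeric labels are needed downstream, for example:
pred_labels_bt = predictions.astype(float).astype(int)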
# cell 21
pd.crosstab(index=test_y['y'].values, columns=predictions, rownames=['actuals'], colnames=['predictions'])
###Output
_____no_output_____
###Markdown
Automatic Model Tuning (optional) Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in the model that performs best, as measured by a metric that you choose. For example, suppose that you want to solve a binary classification problem on this marketing dataset by training the Scikit-Learn Random Forest model above, and your goal is to maximize its accuracy. You don't know which values of the n_estimators and min_samples_leaf hyperparameters to use to train the best model. To find the best values for these hyperparameters, you can specify ranges of values that Amazon SageMaker hyperparameter tuning searches to find the combination that results in the best-performing training job, as measured by the objective metric that you chose. Hyperparameter tuning launches training jobs that use hyperparameter values in the ranges you specified, and returns the training job with the highest accuracy.
###Code
# cell 22
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {"n-estimators": IntegerParameter(50, 250), "min-samples-leaf": IntegerParameter(1, 10)}
# cell 23
objective_metric_name = 'model_accuracy'
# cell 24
tuner = HyperparameterTuner(sklearn_estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions=[{"Name": "model_accuracy", "Regex": "Model Accuracy: ([0-9.]+).*$"}],
objective_type='Maximize',
max_jobs=9,
max_parallel_jobs=3)
# cell 25
tuner.fit({'train': s3_input_train, 'test': s3_input_validation})
# cell 26
boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
# cell 27
# return the best training job name
tuner.best_training_job()
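# Optional (assumption: the tuning job has completed): list every trial with its objective
# value as a DataFrame via the tuner analytics helper.
tuner.analytics().dataframe().sort_values('FinalObjectiveValue', ascending=False).head()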
# cell 28
# Deploy the best trained or user specified model to an Amazon SageMaker endpoint
tuner_predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
tuner_predictor.serializer = sagemaker.serializers.CSVSerializer()
# cell 29
# Deploy the best one and predict
predictions = tuner_predictor.predict(test_x.values)
pd.crosstab(index=test_y['y'].values, columns=np.round(predictions), rownames=['actuals'], colnames=['predictions'])
###Output
_____no_output_____
###Markdown
--- Extensions This example analyzed a relatively small dataset, but utilized Amazon SageMaker features such as distributed, managed training and real-time model hosting, which could easily be applied to much larger problems. In order to improve predictive accuracy further, we could tweak the threshold we apply to our predictions to alter the mix of false positives and false negatives, or we could explore techniques like hyperparameter tuning. In a real-world scenario, we would also spend more time engineering features by hand and would likely look for additional datasets that contain customer information not available in our initial dataset. (Optional) Clean-up If you are done with this notebook, please run the cells below. This will remove the hosted endpoints you created and avoid any charges from stray instances being left on.
###Code
# cell 30
sklearn_predictor.delete_endpoint(delete_endpoint_config=True)
# cell 31
tuner_predictor.delete_endpoint(delete_endpoint_config=True)
###Output
_____no_output_____
|
web scraping 3.ipynb
|
###Markdown
Find and print only the text that appears in green on the page.
###Code
# 'bs' is the BeautifulSoup object built from the page in an earlier cell of this notebook.
nameList = bs.findAll('span', {'class': 'green'})  # every <span class="green"> element
for name in nameList:
    print(name.get_text())
###Output
Anna
Pavlovna Scherer
Empress Marya
Fedorovna
Prince Vasili Kuragin
Anna Pavlovna
St. Petersburg
the prince
Anna Pavlovna
Anna Pavlovna
the prince
the prince
the prince
Prince Vasili
Anna Pavlovna
Anna Pavlovna
the prince
Wintzingerode
King of Prussia
le Vicomte de Mortemart
Montmorencys
Rohans
Abbe Morio
the Emperor
the prince
Prince Vasili
Dowager Empress Marya Fedorovna
the baron
Anna Pavlovna
the Empress
the Empress
Anna Pavlovna's
Her Majesty
Baron
Funke
The prince
Anna
Pavlovna
the Empress
The prince
Anatole
the prince
The prince
Anna
Pavlovna
Anna Pavlovna
|
[03 - Results]/dos results ver 1/router fetch/fft-r1-model-prep.ipynb
|
###Markdown
Module Imports for Data Fetching and Visualization
###Code
import time
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Module Imports for Data Processing
###Code
from sklearn import preprocessing
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
###Output
_____no_output_____
###Markdown
Importing Dataset from GitHub
###Code
dfg1 = pd.read_csv('2-fft-normal-n-0-3-data-r1-good.csv')
dfg2 = pd.read_csv('2-fft-normal-n-0-5-data-r1-good.csv')
dfg3 = pd.read_csv('2-fft-normal-n-0-10-data-r1-good.csv')
dfg4 = pd.read_csv('2-fft-normal-n-0-12-data-r1-good.csv')
dfg5 = pd.read_csv('2-fft-normal-n-0-15-data-r1-good.csv')
dfm1 = pd.read_csv('2-fft-normal-n-0-3-data-r1-mal.csv')
dfm2 = pd.read_csv('2-fft-normal-n-0-5-data-r1-mal.csv')
dfm3 = pd.read_csv('2-fft-normal-n-0-10-data-r1-mal.csv')
dfm4 = pd.read_csv('2-fft-normal-n-0-12-data-r1-mal.csv')
dfm5 = pd.read_csv('2-fft-normal-n-0-15-data-r1-mal.csv')
###Output
_____no_output_____
###Markdown
Characteristics of Dataset
###Code
dfg1
dfm1
df1 = dfg1.append(dfm1, ignore_index=True,sort=False)
df1 = df1.sort_values('timestamp')
df1.to_csv('fft-r1-df1gm.csv',index=False)
df2 = dfg2.append(dfm2, ignore_index=True,sort=False)
df2 = df2.sort_values('timestamp')
df2.to_csv('fft-r1-df2gm.csv',index=False)
df3 = dfg3.append(dfm3, ignore_index=True,sort=False)
df3 = df3.sort_values('timestamp')
df3.to_csv('fft-r1-df3gm.csv',index=False)
df4 = dfg4.append(dfm4, ignore_index=True,sort=False)
df4 = df4.sort_values('timestamp')
df4.to_csv('fft-r1-df4gm.csv',index=False)
df5 = dfg5.append(dfm5, ignore_index=True,sort=False)
df5 = df5.sort_values('timestamp')
df5.to_csv('fft-r1-df5gm.csv',index=False)
print(df1.shape)
print(df2.shape)
print(df3.shape)
print(df4.shape)
print(df5.shape)
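# Hedged refactor sketch (not in the original notebook): the pairing, sorting, and saving
# steps above can be written as one loop over the five good/malicious pairs.
pairs = [(dfg1, dfm1), (dfg2, dfm2), (dfg3, dfm3), (dfg4, dfm4), (dfg5, dfm5)]
for i, (good, mal) in enumerate(pairs, start=1):
    combined = good.append(mal, ignore_index=True, sort=False).sort_values('timestamp')
    combined.to_csv('fft-r1-df{}gm.csv'.format(i), index=False)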
df1 = pd.read_csv('fft-r1-df1gm.csv')
df1['sim'] = 1
df2 = pd.read_csv('fft-r1-df2gm.csv')
df2['sim'] = 2
df3 = pd.read_csv('fft-r1-df3gm.csv')
df3['sim'] = 3
df4 = pd.read_csv('fft-r1-df4gm.csv')
df4['sim'] = 4
df5 = pd.read_csv('fft-r1-df5gm.csv')
df5['sim'] = 5
df1
df = df1.append(df2, ignore_index=True,sort=False)
df = df.append(df3, ignore_index=True,sort=False)
df = df.append(df4, ignore_index=True,sort=False)
df['sim_traversal'] = df['sim']*df['traversal_id']
df.shape
df5['sim_traversal'] = df5['sim']*df5['traversal_id']
df5.to_csv('fft-r1-df5gms.csv',index=False)
df
dft = df.sort_values('timestamp')
dft
df.to_csv('fft-r1-df.csv',index=False)
dft.to_csv('fft-r1-dft.csv',index=False)
###Output
_____no_output_____
|
_posts/python-v3/fundamentals/legends/legend.ipynb
|
###Markdown
New to Plotly? Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/). You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online). We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version Check Plotly's Python API is updated frequently. Run `pip install plotly --upgrade` to update your Plotly version.
###Code
import plotly
plotly.__version__
###Output
_____no_output_____
###Markdown
Show Legend By default, the legend is displayed on Plotly charts with multiple traces.
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
fig = go.Figure(data=data)
py.iplot(fig, filename='default-legend')
###Output
_____no_output_____
###Markdown
Add `showlegend=True` to the `layout` object to display the legend on a plot with a single trace.
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
data = [trace0]
layout = go.Layout(showlegend=True)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='show-legend')
###Output
_____no_output_____
###Markdown
Hide Legend
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(showlegend=False)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='hide-legend')
###Output
_____no_output_____
###Markdown
Hide Legend Entries
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
showlegend=False
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
fig = go.Figure(data=data)
py.iplot(fig, filename='hide-legend-entry')
###Output
_____no_output_____
###Markdown
Legend Names
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
name='Positive'
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
name='Negative'
)
data = [trace0, trace1]
fig = go.Figure(data=data)
py.iplot(fig, filename='legend-names')
###Output
_____no_output_____
###Markdown
Horizontal Legend
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
legend=dict(orientation="h")
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='horizontal-legend')
###Output
_____no_output_____
###Markdown
Legend Position
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
legend=dict(x=-.1, y=1.2)
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='position-legend')
###Output
_____no_output_____
###Markdown
Style Legend
###Code
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
legend=dict(
x=0,
y=1,
traceorder='normal',
font=dict(
family='sans-serif',
size=12,
color='#000'
),
bgcolor='#E2E2E2',
bordercolor='#FFFFFF',
borderwidth=2
)
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='style-legend')
###Output
_____no_output_____
###Markdown
Grouped Legend
###Code
import plotly.plotly as py
data = [
{
'x': [1, 2, 3],
'y': [2, 1, 3],
'legendgroup': 'group', # this can be any string, not just "group"
'name': 'first legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(164, 194, 244)'
}
},
{
'x': [1, 2, 3],
'y': [2, 2, 2],
'legendgroup': 'group',
'name': 'first legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(164, 194, 244)'
}
},
{
'x': [1, 2, 3],
'y': [4, 9, 2],
'legendgroup': 'group2',
'name': 'second legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(142, 124, 195)'
}
},
{
'x': [1, 2, 3],
'y': [5, 5, 5],
'legendgroup': 'group2',
'name': 'second legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(142, 124, 195)'
}
}
]
py.iplot(data, filename='basic-legend-grouping')
###Output
_____no_output_____
###Markdown
You can also hide entries in grouped legends:
###Code
import plotly.plotly as py
data = [
{
'x': [1, 2, 3],
'y': [2, 1, 3],
'legendgroup': 'group',
'name': 'first legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(164, 194, 244)'
}
},
{
'x': [1, 2, 3],
'y': [2, 2, 2],
'legendgroup': 'group',
'name': 'first legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(164, 194, 244)'
},
'showlegend': False
},
{
'x': [1, 2, 3],
'y': [4, 9, 2],
'legendgroup': 'group2',
'name': 'second legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(142, 124, 195)'
}
},
{
'x': [1, 2, 3],
'y': [5, 5, 5],
'legendgroup': 'group2',
'name': 'second legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(142, 124, 195)'
},
'showlegend': False
}
]
py.iplot(data, filename='hiding-entries-from-grouped-legends')
###Output
_____no_output_____
###Markdown
Dash Example [Dash](https://plot.ly/products/dash/) is an Open Source Python library which can help you convert plotly figures into a reactive, web-based application. Below is a simple example of a dashboard created using Dash. Its [source code](https://github.com/plotly/simple-example-chart-apps/tree/master/dash-legend) can easily be deployed to a PaaS.
###Code
from IPython.display import IFrame
IFrame(src= "https://dash-simple-apps.plotly.host/dash-legend/", width="100%", height="820px", frameBorder="0")
from IPython.display import IFrame
IFrame(src= "https://dash-simple-apps.plotly.host/dash-legend/code", width="100%", height=500, frameBorder="0")
###Output
_____no_output_____
###Markdown
ReferenceSee https://plot.ly/python/reference/layout-legend for more information!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'legend.ipynb', 'python/legend/', 'Legends | plotly',
'How to configure and style the legend in Plotly with Python.',
title = 'Legends | plotly',
name = 'Legends',
thumbnail='thumbnail/legends.gif', language='python',
has_thumbnail='true', display_as='file_settings', order=13,
ipynb='~notebook_demo/14')
###Output
_____no_output_____
|
lectures/Instability of parameter estimates.ipynb
|
###Markdown
Instability of Parameter EstimatesBy Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie. Algorithms by David Edwards.Part of the Quantopian Lecture Series:* [www.quantopian.com/lectures](https://www.quantopian.com/lectures)* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)Notebook released under the Creative Commons Attribution 4.0 License.--- ParametersA parameter is anything that a model uses to constrain its predictions. Commonly, a parameter is a quantity that describes a data set or distribution. For example, the mean of a normal distribution is a parameter; in fact, we say that a normal distribution is parametrized by its mean and variance. If we take the mean of a set of samples drawn from the normal distribution, we get an estimate of the mean of the distribution. Similarly, the mean of a set of observations is an estimate of the parameter of the underlying distribution (which is often assumed to be normal). Other parameters include the median, the correlation coefficient to another series, the standard deviation, and every other measurement of a data set. You Never Know, You Only EstimateWhen you take the mean of a data set, you do not know the mean. You have estimated the mean as best you can from the data you have. The estimate can be off. This is true of any parameter you estimate. To actually understand what is going on, you need to determine how good your estimate is by looking at its stability/standard error/confidence intervals. Instability of estimatesWhenever we consider a set of observations, our calculation of a parameter can only be an estimate. It will change as we take more measurements or as time passes and we get new observations. We can quantify the uncertainty in our estimate by looking at how the parameter changes as we look at different subsets of the data. For instance, the standard deviation describes how far the individual observations of a set tend to be from the mean of that set. In financial applications, data often comes in time series. In this case, we can estimate a parameter at different points in time; say, for the previous 30 days. By looking at how much this moving estimate fluctuates as we change our time window, we can compute the instability of the estimated parameter.
###Code
# We'll be doing some examples, so let's import the libraries we'll need
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
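###Markdown
As a quick warm-up (a minimal sketch of one way to quantify how good a mean estimate is): the estimated standard error of a sample mean, $\hat{\sigma}/\sqrt{n}$, shrinks as the sample size grows, which is why larger samples tend to give more stable estimates.
###Code
# Minimal illustration: estimated standard error of the sample mean for several sample sizes
np.random.seed(0)
for n in [10, 100, 1000]:
    sample = np.random.randn(n)
    se = np.std(sample) / np.sqrt(n) # estimated standard error of the mean
    print('n = %4d, mean = % .4f, standard error = %.4f' % (n, np.mean(sample), se))
###Output
_____no_output_____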
###Markdown
Example: mean and standard deviationFirst, let's take a look at some samples from a normal distribution. We know that the mean of the distribution is 0 and the standard deviation is 1; but if we measure the parameters from our observations, we will get only approximately 0 and approximately 1. We can see how these estimates change as we take more and more samples:
###Code
# Set a seed so we can play with the data without generating new random numbers every time
np.random.seed(123)
normal = np.random.randn(500)
print np.mean(normal[:10])
print np.mean(normal[:100])
print np.mean(normal[:250])
print np.mean(normal)
# Plot a stacked histogram of the data
plt.hist([normal[:10], normal[10:100], normal[100:250], normal], normed=1, histtype='bar', stacked=True);
plt.ylabel('Frequency')
plt.xlabel('Value');
print np.std(normal[:10])
print np.std(normal[:100])
print np.std(normal[:250])
print np.std(normal)
###Output
1.2363048015
1.12824047048
1.01746043683
1.00320285616
###Markdown
Notice that, although the probability of getting closer to 0 and 1 for the mean and standard deviation, respectively, increases with the number of samples, we do not always get better estimates by taking more data points. Whatever our expectation is, we can always get a different result, and our goal is often to compute the probability that the result is significantly different than expected.With time series data, we usually care only about contiguous subsets of the data. The moving average (also called running or rolling) assigns the mean of the previous $n$ data points to each point in time. Below, we compute the 90-day moving average of a stock price and plot it to see how it changes. There is no result in the beginning because we first have to accumulate at least 90 days of data. Example: Non-Normal Underlying DistributionWhat happens if the underlying data isn't normal? A mean will be very deceptive. Because of this it's important to test for normality of your data. We'll use a Jarque-Bera test as an example.
###Code
#Generate some data from a bi-modal distribution
def bimodal(n):
X = np.zeros((n))
for i in range(n):
if np.random.binomial(1, 0.5) == 0:
X[i] = np.random.normal(-5, 1)
else:
X[i] = np.random.normal(5, 1)
return X
X = bimodal(1000)
#Let's see how it looks
plt.hist(X, bins=50)
plt.ylabel('Frequency')
plt.xlabel('Value')
print 'mean:', np.mean(X)
print 'standard deviation:', np.std(X)
###Output
mean: 0.00984758128215
standard deviation: 5.06070874011
###Markdown
Sure enough, the mean is incredibly non-informative about what is going on in the data. We have collapsed all of our data into a single estimate, and lost a lot of information doing so. This is what the distribution should look like if our hypothesis that it is normally distributed is correct.
###Code
mu = np.mean(X)
sigma = np.std(X)
N = np.random.normal(mu, sigma, 1000)
plt.hist(N, bins=50)
plt.ylabel('Frequency')
plt.xlabel('Value');
###Output
_____no_output_____
###Markdown
We'll test our data using the Jarque-Bera test to see if it's normal. A significant p-value indicates non-normality.
###Code
from statsmodels.stats.stattools import jarque_bera
jarque_bera(X)
###Output
_____no_output_____
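###Markdown
The statsmodels `jarque_bera` function returns the JB statistic, its p-value, and the sample skew and kurtosis; unpacking the first two makes the comparison below explicit (a small sketch, with variable names of our own choosing).
###Code
# Unpack the Jarque-Bera result so the p-value is visible by name
jb_stat, jb_pvalue = jarque_bera(X)[:2]
print('JB statistic: %f' % jb_stat)
print('p-value: %f' % jb_pvalue)
###Output
_____no_output_____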
###Markdown
Sure enough, the p-value is < 0.05, so we say that X is not normal. This saves us from accidentally making horrible predictions. Example: Sharpe ratioOne statistic often used to describe the performance of assets and portfolios is the Sharpe ratio, which measures the additional return per unit of additional risk achieved by a portfolio, relative to a risk-free source of return such as Treasury bills:$$R = \frac{E[r_a - r_b]}{\sqrt{Var(r_a - r_b)}}$$where $r_a$ is the return on our asset and $r_b$ is the risk-free rate of return. As with the mean and standard deviation, we can compute a rolling Sharpe ratio to see how our estimate changes through time.
###Code
def sharpe_ratio(asset, riskfree):
return np.mean(asset - riskfree)/np.std(asset - riskfree)
start = '2012-01-01'
end = '2015-01-01'
# Use an ETF that tracks 3-month T-bills as our risk-free rate of return
treasury_ret = get_pricing('BIL', fields='price', start_date=start, end_date=end).pct_change()[1:]
pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end)
returns = pricing.pct_change()[1:] # Get the returns on the asset
# Compute the running Sharpe ratio
running_sharpe = [sharpe_ratio(returns[i-90:i], treasury_ret[i-90:i]) for i in range(90, len(returns))]
# Plot running Sharpe ratio up to 100 days before the end of the data set
_, ax1 = plt.subplots()
ax1.plot(range(90, len(returns)-100), running_sharpe[:-100]);
ticks = ax1.get_xticks()
ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.xlabel('Date')
plt.ylabel('Sharpe Ratio');
###Output
_____no_output_____
###Markdown
The Sharpe ratio looks rather volatile, and it's clear that just reporting it as a single value will not be very helpful for predicting future values. Instead, we can compute the mean and standard deviation of the data above, and then see if it helps us predict the Sharpe ratio for the next 100 days.
###Code
# Compute the mean and std of the running Sharpe ratios up to 100 days before the end
mean_rs = np.mean(running_sharpe[:-100])
std_rs = np.std(running_sharpe[:-100])
# Plot running Sharpe ratio
_, ax2 = plt.subplots()
ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
ax2.plot(range(90, len(returns)), running_sharpe)
# Plot its mean and the +/- 1 standard deviation lines
ax2.axhline(mean_rs)
ax2.axhline(mean_rs + std_rs, linestyle='--')
ax2.axhline(mean_rs - std_rs, linestyle='--')
# Indicate where we computed the mean and standard deviations
# Everything after this is 'out of sample' which we are comparing with the estimated mean and std
ax2.axvline(len(returns) - 100, color='pink');
plt.xlabel('Date')
plt.ylabel('Sharpe Ratio')
plt.legend(['Sharpe Ratio', 'Mean', '+/- 1 Standard Deviation'])
print 'Mean of running Sharpe ratio:', mean_rs
print 'std of running Sharpe ratio:', std_rs
###Output
Mean of running Sharpe ratio: 0.0646215053325
std of running Sharpe ratio: 0.0778015776531
###Markdown
The standard deviation in this case is about a quarter of the range, so this data is extremely volatile. Taking this into account when looking ahead gave a better prediction than just using the mean, although we still observed data more than one standard deviation away. We could also compute the rolling mean of the Sharpe ratio to try and follow trends; but in that case, too, we should keep in mind the standard deviation. Example: Moving AverageLet's say you take the average with a lookback window; how would you determine the standard error on that estimate? Let's start with an example showing a 90-day moving average.
###Code
# Load time series of prices
start = '2012-01-01'
end = '2015-01-01'
pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end)
# Compute the rolling mean for each day
mu = pd.rolling_mean(pricing, window=90)
# Plot pricing data
_, ax1 = plt.subplots()
ax1.plot(pricing)
ticks = ax1.get_xticks()
ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.ylabel('Price')
plt.xlabel('Date')
# Plot rolling mean
ax1.plot(mu);
plt.legend(['Price','Rolling Average']);
###Output
_____no_output_____
###Markdown
This lets us see the instability/standard error of the mean, and helps anticipate future variability in the data. We can quantify this variability by computing the mean and standard deviation of the rolling mean.
###Code
print 'Mean of rolling mean:', np.mean(mu)
print 'std of rolling mean:', np.std(mu)
###Output
Mean of rolling mean: 288.399003348
std of rolling mean: 51.1188097398
###Markdown
In fact, the standard deviation, which we use to quantify variability, is itself variable. Below we plot the rolling standard deviation (for a 90-day window), and compute its mean and standard deviation.
###Code
# Compute rolling standard deviation
std = pd.rolling_std(pricing, window=90)
# Plot rolling std
_, ax2 = plt.subplots()
ax2.plot(std)
ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.ylabel('Standard Deviation of Moving Average')
plt.xlabel('Date')
print 'Mean of rolling std:', np.mean(std)
print 'std of rolling std:', np.std(std)
###Output
Mean of rolling std: 17.3969897999
std of rolling std: 7.54619079684
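###Markdown
This also gives a rough answer to the standard-error question posed at the start of this example (a back-of-the-envelope sketch only, since it treats the 90 observations in each window as independent, which prices are not): the standard error of each 90-day mean is roughly the rolling standard deviation divided by $\sqrt{90}$.
###Code
# Naive standard error of the 90-day rolling mean, reusing the rolling std computed above
naive_se = std / np.sqrt(90)
print('Mean of the naive standard error: %f' % np.mean(naive_se))
###Output
_____no_output_____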
###Markdown
To see what this changing standard deviation means for our data set, let's plot the data again along with the Bollinger bands: the rolling mean, one rolling standard deviation (of the data) above the mean, and one standard deviation below. Note that although standard deviations give us more information about the spread of the data, we cannot assign precise probabilities to our expectations for future observations without assuming a particular distribution for the underlying process.
###Code
# Plot original data
_, ax3 = plt.subplots()
ax3.plot(pricing)
ax3.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
# Plot Bollinger bands
ax3.plot(mu)
ax3.plot(mu + std)
ax3.plot(mu - std);
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(['Price', 'Moving Average', 'Moving Average +1 Std', 'Moving Average -1 Std'])
###Output
_____no_output_____
|
CONTENT/PYTHON/_Jupyter-notebooks/archived-files/1_basics.ipynb
|
###Markdown
ISRC Python Workshop: Basics I Introduction  Interpreted Language*You can type your command and get the response instantly.*Example:  Popular Language for Data Analysis  Variables*Variables can be considered containers. You can put anything inside a container without specifying the size or type, as you would need to in Java or C. Note that Python is case-sensitive. Be careful about using letters in different cases.*
###Code
x = 3 # integer
y = 3. # floating point number
z = "Hello!" # strings
Z = "Wonderful!" # another string, stored in a variable big z.
print(x)
print(y)
print(z)
print(Z)
###Output
3
3.0
Hello!
Wonderful!
###Markdown
*You can do operations on numeric values as well as strings.*
###Code
sum_ = x + y # int + float = float
v = "World!"
sum_string = z + " " + v # concatenate strings
print(sum_)
print(sum_string)
###Output
6.0
Hello! World!
###Markdown
*Print with formatting*
###Code
print("The sum of x and y is %f"%sum_)
###Output
The sum of x and y is 6.000000
###Markdown
*__Some notes on Strings__**To initialize a string variable, you can use either double or single quotes.*
###Code
store_name = "HyVee"
###Output
_____no_output_____
###Markdown
*You can think of strings as a sequence of characters. In this case, indices and bracket notations can be used to access specific ranges of characters.*
###Code
name_13 = store_name[1:4] # [start, end), end is exclusive; Python starts with 0 NOT 1
print(name_13)
last_letter = store_name[-1] # -1 means the last element
print(last_letter)
###Output
yVe
e
###Markdown
Control Logic*The following examples show comparisons, if-else statements, for loops, and while loops.* Comparison
###Code
print(store_name == "HyVee") # Will return a boolean value True or False
print(sum_ < 0)
###Output
True
False
###Markdown
If-Else
###Code
if store_name != "Walmart":
print("The store is not Walmart. It's " + store_name + ".")
else:
print("The store is Walmart.")
if sum_ == 0:
print("sum_ is 0")
elif sum_ < 0:
print("sum_ is less than 0")
else:
print("sum_ is above 0 and its value is " + str(sum_)) # Cast sum_ into string type.
###Output
sum_ is above 0 and its value is 6.0
###Markdown
For loop: Iterating through a sequence
###Code
for letter in store_name:
print(letter)
# Use index to access specific elements
# range() is a function to create interger sequences
print("range(5) gives: " + str(range(5))) # By default starts from 0
print("range(1,9) gives: " + str(range(1, 9))) # From 1 to 9-1 (Again the end index is exclusive.)
for index in range(len(store_name)): # length of a sequence
print("The %ith letter in store_name is: "%index + store_name[index])
###Output
H
y
V
e
e
range(5) gives: [0, 1, 2, 3, 4]
range(1,9) gives: [1, 2, 3, 4, 5, 6, 7, 8]
The 0th letter in store_name is: H
The 1th letter in store_name is: y
The 2th letter in store_name is: V
The 3th letter in store_name is: e
The 4th letter in store_name is: e
###Markdown
While loop: Keep doing until condition no longer holds.*Use __for__ when you know the exact number of iterations; use __while__ when you do not (e.g., checking convergence).*
###Code
flag = True
index = 0
while flag:
print(store_name[index])
index += 1 # a += b means a = a + b
if index >= len(store_name):
flag = False # if we get to the last element of string, the condition no longer holds
print("The End!")
###Output
H
y
V
e
e
The End!
###Markdown
Notes: Keywords *break* and *continue* *break* means get out of the loop immediately. Any code after the break will NOT be executed.
###Code
flag = True
index = 0
while flag:
print(store_name[index])
index += 1 # a += b means a = a + b
if store_name[index] == "V":
print("End at V")
break # instead of setting flag to False, we can directly break out of the loop
print("Hello!") # This will NOT be run
###Output
H
y
End at V
###Markdown
*continue means go to the next iteration of the loop. It is __breaking__ out of the current iteration and __continuing__ to the next.*
###Code
for letter in store_name:
if letter == "V":
continue # Not printing V
else:
print(letter)
###Output
H
y
e
e
###Markdown
Data Structures*In this section, we show some major data structures in Python.* List*Initialize a list with brackets. You can store anything in a list, even if the elements are of different types.*
###Code
a_list = [1, 2, 3] # commas to seperate elements
print("Length of a_list is: %i"%(len(a_list)))
print("The 3rd element of a_list is: %s" %(a_list[2])) # Remember Python starts with 0
print("The sum of a_list is %.2f"%(sum(a_list)))
b_list = [20, True, "good", "good"] # We can put different types in a list
###Output
Length of a_list is: 3
The 3rd element of a_list is: 3
The sum of a_list is 6.00
###Markdown
*Update a list: __pop__, __remove__, __append__, __extend__*
###Code
print("Pop %i out of a_list"%a_list.pop(1)) # pop the value of an index
print(a_list)
print("Remove the string good from b_list:")
b_list.remove("good") # remove a specific value (the first one in the list)
print(b_list)
a_list.append(10)
print("After appending a new value, a_list is now: %s"%(str(a_list)))
# merge a_list and b_list
a_list.extend(b_list)
## This is equivalent to a_list += b_list
print("Merging a_list and b_list: %s"%(str(a_list)))
print("We can also use + to concatenate two lists: a_list + b_list = %s"%(a_list+b_list))
###Output
Pop 2 out of a_list
[1, 3]
Remove the string good from b_list:
[20, True, 'good']
After appending a new value, a_list is now: [1, 3, 10]
Merging a_list and b_list: [1, 3, 10, 20, True, 'good']
We can also use + to concatenate two lists: a_list + b_list = [1, 3, 10, 20, True, 'good', 20, True, 'good']
###Markdown
Tuple (A special case of a list whose elements cannot be changed)*Initialize a tuple with parentheses. The only difference between a list and a tuple is that you can alter a list but not a tuple.*
###Code
a_tuple = (1, 2, 3, 10)
print(a_tuple)
print("First element of a_tuple: %i"%a_tuple[0])
# You cannot change the values of a_tuple
a_tuple[0] = 5
###Output
(1, 2, 3, 10)
First element of a_tuple: 1
###Markdown
Dictionary: key-value pairs*Initialize a dict with curly brackets.*
###Code
d = {} # empty dictionary
d[1] = "1 value" # add a key-value by using bracket (key). You can put anything in key or value.
print(d)
# Use for loop to add values
for index in range(2, 10):
d[index] = "%i value"%index
print(d)
print("All the keys: " + str(d.keys()))
print("All the values: " + str(d.values()))
for key in d:
print "Key is: %i, Value is : %s"%(key, d[key])
###Output
{1: '1 value'}
{1: '1 value', 2: '2 value', 3: '3 value', 4: '4 value', 5: '5 value', 6: '6 value', 7: '7 value', 8: '8 value', 9: '9 value'}
All the keys: [1, 2, 3, 4, 5, 6, 7, 8, 9]
All the values: ['1 value', '2 value', '3 value', '4 value', '5 value', '6 value', '7 value', '8 value', '9 value']
Key is: 1, Value is : 1 value
Key is: 2, Value is : 2 value
Key is: 3, Value is : 3 value
Key is: 4, Value is : 4 value
Key is: 5, Value is : 5 value
Key is: 6, Value is : 6 value
Key is: 7, Value is : 7 value
Key is: 8, Value is : 8 value
Key is: 9, Value is : 9 value
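###Markdown
*A dictionary can also be created directly with key-value pairs inside the curly brackets (a small sketch; the keys and values here are arbitrary examples).*
###Code
prices = {"apple": 1.5, "banana": 0.5} # literal key-value pairs
prices["orange"] = 0.8 # add another pair
print(prices["apple"]) # look up a value by its key
print("melon" in prices) # membership test checks the keys
###Output
_____no_output_____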
###Markdown
Functions*Now we can write our first function by combining all we have learned above.**A function is a block of code with input arguments (and, optionally, return values) for specific purposes.*
###Code
def mySum(list_to_sum):
return sum(list_to_sum)
def mySumUsingLoop(list_to_sum):
sum_ = list_to_sum[0]
for item in list_to_sum[1:]:
sum_ += item
return sum_
#################################
print(mySum(range(5)))
print(mySumUsingLoop(range(5)))
###Output
10
10
###Markdown
File I/O*This section is about some basics of reading and writing data to your hard disk.* Write data to a file
###Code
f = open("./tmp.csv", "w") # f is a file handler, while "w" is the mode (w for write)
data = range(10)
for item in data:
f.write(str(item))
f.write("\n") # add newline character
f.close()
###Output
_____no_output_____
###Markdown
Read data from a file
###Code
f = open("./tmp.csv", "r") # this time, use read mode
contents = [item for item in f] # list comprehension. This is the same as for-loop but more concise
contents = [item.strip("\n") for item in contents] # strip the newline
print(contents)
int_values = map(int, contents) # map the values into integer type
print(int_values)
f.close()
###Output
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
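###Markdown
*The list comprehension and map call above can also be written as an explicit for loop that does the same thing (shown here only for comparison).*
###Code
# Explicit-loop version of reading the integers back from the file
f = open("./tmp.csv", "r")
int_values_loop = []
for line in f:
    int_values_loop.append(int(line.strip("\n"))) # strip the newline, then convert to int
f.close()
print(int_values_loop)
###Output
_____no_output_____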
###Markdown
Libraries Built-in Libraries*Python provides many built-in packages that save you from re-implementing common and useful functions.**We will use __math__ as an example.*
###Code
# use import to load a library
import math
x = 3
print("e^x = e^3 = %f"%math.exp(x))
print("log(x) = log(3) = %f"%math.log(x))
# You can import a specific function
from math import exp
print(exp(x)) # This way, you don't need to use math.exp but just exp
# Import all functions
from math import *
print(exp(x))
print(log(x)) # Try these two before importing math
###Output
20.0855369232
20.0855369232
1.09861228867
###Markdown
External Libraries*There are times you'll want some advanced utility functions not provided by Python itself. There are many useful packages built by developers.**We'll use __numpy__ as an example (__numpy__, __scipy__, __matplotlib__, and probably __pandas__ will be of the most importance to you for data analyses).**Installation of packages for Python is easiest using pip.*
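*For example, from a notebook cell you can install packages with a pip command like the one below (a minimal illustration; the package names are simply the ones used in this section).*
###Code
! pip install numpy scipy
###Output
_____no_output_____
###Markdown
*Once installed, a package is imported just like a built-in module:*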
###Code
# After you install numpy, load it
import numpy as np # you can use np instead of numpy to call the functions in numpy package
x = np.array([[1,2,3], [4,5,6]], dtype=np.float) # create a numpy array object, specify the data type as float
print(x)
# Scipy/Numpy provides extensive utilities to manipulate data and simple analysis
from scipy.stats import pearsonr, spearmanr # correlation functions
print(pearsonr(x[1, :], x[0, :]))
print(spearmanr(x[1, :], x[0, :]))
###Output
(1.0, 0.0)
SpearmanrResult(correlation=1.0, pvalue=0.0)
|